How to Assess Your M&E Team's AI Readiness
Most M&E teams jump to AI tools before knowing if they're ready. A 20-minute self-assessment across 5 dimensions tells you where to invest first and what to skip.
AI readiness is not about technology. It is about whether your data foundations, governance structures, team skills, and decision processes can support AI-assisted workflows without creating new risks. Most M&E teams score well on curiosity but poorly on the foundations that make AI tools actually useful.
The 5-Level AI Readiness Ladder
Adapted for M&E functions from MITRE, Microsoft, and UK Government maturity models. Assess your team on each dimension separately: a single average score hides the gaps that matter.
Level 1: Ad Hoc
Individual team members experiment with ChatGPT or similar tools for isolated tasks. No shared guidelines, no data governance review, no documentation of what AI was used for. If the person using AI leaves, the capability leaves with them.
Level 2: Repeatable Pilots
The team has run 1-2 structured pilots with clear objectives and success criteria. Basic guardrails exist (e.g., "don't upload participant data"). Someone has thought about which use cases make sense. Results are documented but not yet standardized.
Level 3: Defined & Operational
Standard processes exist for AI-assisted tasks: when to use AI, what data is allowed, how outputs are validated, and how AI use is disclosed in reports. Governance is documented. Multiple team members can use the same workflows consistently.
Level 4: Scaled & Managed
AI-assisted workflows operate across the M&E cycle (data collection, analysis, reporting). Quality is monitored. Risks are managed proactively. The team can demonstrate efficiency gains and quality improvements with evidence, not anecdotes.
Level 5: Optimized
AI is embedded in planning and accountability. Continuous improvement cycles refine prompts, workflows, and validation processes. The team contributes to organizational AI governance. Value is demonstrable and trusted by stakeholders.
Common Readiness Mistakes
These patterns repeat across organizations. The "ready" version is not about having more technology. It is about having the foundations in place.
Starting an AI Pilot: Not Ready
"Let's all start using ChatGPT for our reports." No assessment of data maturity, no governance review, no success criteria. Three months later, half the team has abandoned it because outputs were unreliable, and nobody knows which reports used AI.
Starting an AI Pilot: Ready
"Before piloting AI for report drafting, let's assess: Is our data clean enough for AI to work with? Do we have guidelines for what data can be shared? How will we validate outputs? What does success look like after 3 months?" Then pilot with one use case and measure.
Claiming AI Readiness: Not Ready
"We're AI-ready because we bought Copilot licenses for the team." Technology procurement without data readiness is like buying a car before building the road. The tools sit unused or produce unreliable results because underlying data quality, governance, and skills are not there.
Claiming AI Readiness: Ready
"We scored Level 2 on data maturity, Level 1 on governance, and Level 3 on team skills. Our priority is data governance before we scale AI use. We'll reassess in 6 months." Dimension-by-dimension scoring reveals where to invest.
Assessment Design: Not Ready
"Rate your organization's AI maturity on a scale of 1-10." One number tells you nothing actionable. It conflates data quality, governance, skills, and technology into a single meaningless score.
Assessment Design: Ready
"Score each dimension (data, governance, skills, tools, value) on the 5-level ladder, with evidence for each rating. Where are the gaps between dimensions?" Disaggregated scoring reveals, for example, that Level 2 data maturity is bottlenecking Level 3 technology investments.
5 Rules for Honest Readiness Assessment
Assess data maturity before AI maturity
AI readiness fails most often because data foundations are weak: inconsistent formats, missing metadata, poor quality controls. The UK Data Maturity Framework covers this in 10 topics. If your data maturity is Level 1-2, AI tools will amplify problems, not solve them.
Include governance as a dimension, not an afterthought
Every major maturity model (MITRE, Microsoft, World Bank) includes governance as a core pillar. "Can we use AI?" is a governance question before it is a technology question. Assess: Do you have usage guidelines? Data sharing rules? Disclosure requirements?
Make assessments repeatable
The best maturity frameworks (World Bank DGRA, Data Orchard) are designed for annual reassessment. Run your readiness check every 6-12 months. Track movement across dimensions over time, not just a snapshot.
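Tracking movement between reassessments can be sketched as a short script. The dimension names and scores below are hypothetical, chosen only to illustrate reporting per-dimension change rather than a single averaged number:

```python
# Two assessment snapshots, each scored per dimension on the 5-level ladder.
# Values are illustrative, not from a real assessment.
baseline = {"data": 2, "governance": 1, "skills": 3, "tools": 2, "value": 1}
year_one = {"data": 3, "governance": 2, "skills": 3, "tools": 2, "value": 2}

# Movement per dimension between the two reassessments.
movement = {dim: year_one[dim] - baseline[dim] for dim in baseline}

for dim, delta in movement.items():
    status = f"+{delta}" if delta > 0 else ("no change" if delta == 0 else str(delta))
    print(f"{dim}: {baseline[dim]} -> {year_one[dim]} ({status})")
```

A spreadsheet works just as well; the point is that each reassessment keeps the same dimensions so movement is comparable year over year.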
Score by dimension, never by average
Microsoft's Responsible AI Maturity Model explicitly warns against averaging scores across dimensions. A team at Level 4 on skills and Level 1 on governance has a governance problem, not a "Level 2.5" maturity. Disaggregate.
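The disaggregation rule can be made concrete with a minimal sketch. The dimension names and scores are hypothetical; the point is that the lowest-scoring dimension, not the average, identifies where to invest:

```python
# Per-dimension readiness scores on the 5-level ladder (illustrative values).
scores = {
    "data_foundations": 4,
    "governance_risk": 1,
    "team_skills": 4,
    "tools_infrastructure": 3,
    "value_integration": 2,
}

# Averaging hides the gap: this team looks like a middling "Level 2.8" overall...
average = sum(scores.values()) / len(scores)

# ...but the binding constraint is the lowest dimension, which sets the priority.
bottleneck = min(scores, key=scores.get)

print(f"Misleading average: {average:.1f}")
print(f"Actual constraint: {bottleneck} at Level {scores[bottleneck]}")
```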
Start with use cases, not tools
The question is not "Are we ready for AI?" It is "Are we ready for AI-assisted qualitative coding?" or "Are we ready for AI-assisted indicator monitoring?" Readiness varies by use case. Assess against specific workflows you actually want to improve.
AI Readiness Self-Assessment Prompt
Use this prompt with any AI tool to generate a structured readiness assessment for your M&E team. Fill in the bracketed fields.
I need you to help me assess my M&E team's readiness to adopt AI tools. Generate a structured self-assessment with scoring guidance.

Context:
- Organization type: [NGO / UN agency / government / consulting firm]
- Team size: [number] M&E staff
- Current AI use: [none / informal experimentation / some structured use]
- Primary M&E activities: [data collection, analysis, reporting, evaluation, etc.]
- Key donor(s): [USAID / FCDO / EU / World Bank / UN / other]

For each of the following 5 dimensions, provide:
1. A description of what Levels 1-5 look like for this specific dimension
2. Three diagnostic questions to help me identify our current level
3. One priority action to move from our likely current level to the next

Dimensions:
1. Data Foundations (quality, accessibility, metadata, interoperability)
2. Governance & Risk (usage policies, data protection, disclosure, procurement)
3. Team Skills & Culture (AI literacy, willingness, training, collaboration)
4. Tools & Infrastructure (available platforms, integration with existing systems)
5. Value & Decision Integration (whether AI outputs actually improve decisions)

Format as a table for each dimension with columns: Level | Description | Diagnostic Questions | Priority Action.
Build on Your Assessment
Once you know your readiness level, use our governance guide to address gaps and our free tools to start structured AI pilots.
Related Quick Guides
How to Build AI Governance for M&E
The 6-point governance framework every M&E team needs before scaling AI use.
How to Protect Data Privacy When Using AI
What's safe to share and what to remove before using any AI tool.
How to Choose the Right AI Tool for M&E
ChatGPT vs Claude vs Gemini: which to use for which M&E task.