M&E Studio

AI for M&E, Built for Practitioners

© 2026 Logic Lab LLC. All rights reserved.


Surveys vs Interviews vs Focus Groups

The three most common M&E data collection methods, compared. Surveys tell you how many, interviews tell you why, focus groups tell you what people agree on.

At a Glance

| | Survey | Key Informant Interview (KII) | Focus Group Discussion (FGD) |
|---|---|---|---|
| What it measures | Prevalence, scale, trends (how many, how much) | Individual perspectives, expert knowledge (why, how) | Shared experiences, social norms, group consensus (what do people agree on) |
| Sample size | 200-2,000+ (statistical) | 12-30 (purposive) | 4-8 groups, 6-10 people each |
| Format | Structured questionnaire (closed + some open questions) | Semi-structured guide (open questions, flexible flow) | Facilitated group discussion (guided by topic list) |
| Duration per session | 20-45 minutes | 45-90 minutes | 60-120 minutes |
| Cost per respondent | $5-25 (depending on method and context) | $30-80 (transcription, skilled interviewer) | $15-40 per participant (facilitator, venue, refreshments) |
| Data type | Quantitative (primarily) | Qualitative | Qualitative |
| Can generalize? | Yes (with probability sampling) | No (informant selection is purposive) | No (participants are not representative) |
| Sensitive topics? | Limited (social desirability bias) | Good (private, one-on-one) | Poor (group setting inhibits disclosure) |

When to Use Each

Surveys

Best for:

  • Baseline and endline measurement of indicators
  • Measuring the prevalence of behaviors, practices, or conditions across a population
  • Comparing groups (treatment vs control, men vs women, regions)
  • Tracking changes over time with standardized measures
  • Donor reporting (most indicators require quantitative data)

Not suitable for:

  • Understanding complex motivations or experiences
  • Exploring topics you do not know enough about to write good questions
  • Sensitive topics where respondents may not answer honestly
  • Very small populations (under 50 people; just interview them)

Practical considerations:

  • You need a sampling frame: a list of the target population or a method to create one. Use a sampling calculator to determine how many respondents you actually need.
  • Good survey design takes time and requires pre-testing. Do not skip this.
  • Enumerators need training (1-3 days minimum).
  • Mobile data collection platforms like KoboToolbox, ODK, or SurveyCTO reduce data entry errors and speed up analysis.
  • Always budget for non-response. Sample 15-20% more than your target.
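The sampling advice above can be sketched as a quick calculation. This is the standard Cochran formula with a finite population correction; the z-score, margin of error, and 17.5% non-response buffer below are illustrative assumptions, not figures prescribed by this guide:

```python
import math

def survey_sample_size(population, margin_of_error=0.05, z=1.96,
                       p=0.5, non_response_buffer=0.175):
    """Cochran sample size with finite population correction,
    inflated for expected non-response (the guide suggests
    sampling 15-20% above target; 17.5% assumed here)."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # finite population correction
    return math.ceil(n * (1 + non_response_buffer))     # add non-response buffer

# Example: 5,000 households, 5% margin of error, 95% confidence
print(survey_sample_size(5000))  # → 420
```

A dedicated sampling calculator will do the same arithmetic; the point is that the non-response buffer belongs in the sample size, not as an afterthought in the budget.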

Key Informant Interviews (KIIs)

Best for:

  • Understanding "why" behind quantitative findings
  • Capturing expert or stakeholder perspectives (program managers, community leaders, government officials)
  • Exploring sensitive topics (gender-based violence, corruption, political dynamics) where anonymity and privacy matter
  • Investigating program processes and implementation challenges
  • Early program design when you do not know enough to write survey questions

Not suitable for:

  • Measuring prevalence or scale (you cannot say "60% of beneficiaries" from 15 interviews)
  • Comparing groups statistically
  • Topics where individual perspective is less important than social dynamics (use FGDs instead)

Practical considerations:

  • Interviewer quality matters enormously. A poor interviewer gets shallow data regardless of the guide. Invest in hiring or training the right person.
  • Record and transcribe key informant interviews (with consent) for systematic analysis.
  • Interviews should be semi-structured: have a guide with key questions, but let the conversation go where it needs to.
  • Typically 12-20 interviews for a single stakeholder group. Saturation usually occurs around interview 12-16.
  • Budget 2-4 hours per interview (including travel, setup, debrief, and notes).
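The per-interview time budget above can be turned into a simple planner. The transcription ratio used here (4 hours of work per hour of audio) is an assumption for illustration, not a figure from this guide:

```python
def kii_time_budget(n_interviews, interview_hours=1.25,
                    overhead_hours=2.0, transcription_ratio=4.0):
    """Rough time estimate for a round of KIIs.
    interview_hours: average session length (guide range: 45-90 min).
    overhead_hours: travel, setup, debrief, and notes per interview.
    transcription_ratio: assumed hours of transcription work per
    hour of recorded audio."""
    field = n_interviews * (interview_hours + overhead_hours)
    transcription = n_interviews * interview_hours * transcription_ratio
    return {"field_hours": field, "transcription_hours": transcription}

# Example: 16 interviews (around the typical saturation point)
print(kii_time_budget(16))
```

Note that transcription hours can exceed field hours, which is why transcription so often becomes the bottleneck in qualitative budgets.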

Focus Group Discussions (FGDs)

Best for:

  • Understanding shared experiences and social norms ("What does the community think about...")
  • Exploring areas of agreement and disagreement among a group
  • Generating ideas for program design (participatory approaches)
  • Understanding how groups make collective decisions
  • Reaching many perspectives efficiently (8 people in 90 minutes vs 8 individual interviews taking 8+ hours)

Not suitable for:

  • Sensitive or stigmatized topics (participants may not speak openly in a group)
  • Topics where power dynamics will silence some participants (mixing managers with frontline staff, mixing community leaders with ordinary members)
  • Getting individual-level data (FGD data represents group dynamics, not individual views)
  • Situations where confidentiality is critical

Practical considerations:

  • Separate focus group discussions by relevant characteristics: men and women separately, different age groups separately, different geographic areas.
  • 6-10 participants per group is ideal. Fewer than 5 limits discussion. More than 10 makes moderation difficult.
  • A skilled facilitator is essential. Facilitation is a different skill from interviewing.
  • Always have a note-taker in addition to the facilitator.
  • Record with consent. Transcription is time-consuming but necessary for rigorous analysis.
  • Offer refreshments but avoid payments that could coerce participation.
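The segmentation advice above implies a quick way to count how many groups a design requires: multiply the strata. A minimal sketch (the stratification variables below are illustrative, not from the text):

```python
from itertools import product

def plan_fgds(strata, groups_per_segment=1, participants_per_group=8):
    """Enumerate FGD segments from stratification variables and
    count groups and participants. The guide recommends 6-10
    participants per group."""
    assert 6 <= participants_per_group <= 10, "outside recommended range"
    segments = list(product(*strata.values()))
    n_groups = len(segments) * groups_per_segment
    return segments, n_groups, n_groups * participants_per_group

# Illustrative strata: sex x region
segments, groups, participants = plan_fgds(
    {"sex": ["women", "men"], "region": ["north", "south"]})
print(groups, participants)  # 4 groups, 32 participants
```

Adding a third stratification variable doubles or triples the group count, which is why segmentation decisions drive FGD budgets more than anything else.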

Remote and Digital Alternatives

Post-COVID, remote data collection is now standard practice. Phone surveys work well for short questionnaires (under 20 minutes) with populations that have reliable phone access. Online surveys reach urban, connected populations quickly but introduce severe coverage bias in rural or low-income settings. Remote KIIs via video call produce data nearly as rich as in-person interviews, provided the informant has a stable connection and a private space. Remote FGDs are the hardest to pull off: group dynamics suffer on video, dominant voices take over more easily, and connectivity drops disrupt the conversation. Use remote FGDs only when in-person is genuinely impossible, and cap group size at 6. Whatever mode you choose, do not mix remote and in-person respondents within the same data collection exercise. The mode affects responses, and mixing introduces bias you cannot control for.

Combining Methods

Most evaluations use two or three of these methods together. Common combinations:

Survey + KIIs (most common) The survey provides the numbers. The interviews explain them. Run the survey first to identify patterns, then interview key informants to understand why those patterns exist. For example, if your survey shows that only 35% of trained farmers adopted a new technique, KIIs with agricultural extension workers and non-adopting farmers will tell you whether the barrier is knowledge, cost, land tenure, or something else entirely.

Survey + FGDs Similar logic, but FGDs capture community-level perspectives and social dynamics that individual interviews miss. This combination works best when program outcomes depend on collective behavior: community-led total sanitation, village savings groups, or natural resource management committees. The survey tells you adoption rates. The FGDs tell you what social pressures or group norms are driving those rates.

All three For comprehensive evaluations, use all three. The survey measures indicators. KIIs capture expert and stakeholder perspectives. FGDs capture community-level norms and experiences. This is the standard approach for most mid-term and final evaluations. Plan the sequencing deliberately: qualitative methods first if you are exploring, survey first if you are testing hypotheses.

Consider supplementing any combination with direct observation methods when you need to verify self-reported behavior against what actually happens. Self-reports of handwashing, latrine use, or agricultural practices are notoriously unreliable. Observation adds a layer of verification that strengthens your findings.

Sector Examples

Nutrition program measuring dietary diversity. You need a household survey (n=380+) to quantify the prevalence of minimum dietary diversity among women of reproductive age. That gives you the indicator. But the number alone does not explain why dietary diversity is low. Add 15 KIIs with health workers and community nutrition volunteers to identify barriers to dietary change, such as market access, seasonal availability, or household decision-making about food purchases.

Youth livelihoods program assessing skills training outcomes. Survey 500 graduates with a structured questionnaire covering employment status, income changes, and skills application six months post-training. Then run 6 FGDs (3 male, 3 female) to understand how graduates experience the job market, what barriers they face, and whether the training content matched actual employer expectations. The FGDs will surface patterns the survey cannot capture, like social stigma around certain trades, the role of peer networks in finding work, or gendered differences in access to startup capital.

Cost Comparison

These estimates reflect a typical program evaluation in a low-income country. Actual costs shift with geographic accessibility, language requirements, and whether you use internal staff or external consultants. Treat them as a planning baseline, not a final budget.

| Component | Survey (400 HH) | KIIs (20) | FGDs (8 groups) |
|---|---|---|---|
| Design | $2,000-4,000 | $500-1,000 | $500-1,000 |
| Training | $1,500-3,000 | $500-1,000 | $500-1,000 |
| Data collection | $6,000-15,000 | $2,000-4,000 | $2,000-3,000 |
| Data entry/transcription | $1,000-2,000 | $1,500-3,000 | $1,500-2,500 |
| Analysis | $2,000-5,000 | $2,000-4,000 | $1,500-3,000 |
| Total estimate | $12,500-29,000 | $6,500-13,000 | $6,000-10,500 |

The biggest cost driver for surveys is geography. Dispersed rural populations with poor road access can double or triple data collection costs. For KIIs and FGDs, the biggest cost driver is transcription and translation. Budget for both from the start.
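The table's ranges can be combined into a budget envelope for a mixed-methods design. This sketch simply restates the table's totals; nothing below is new data:

```python
def combine_cost_ranges(*ranges):
    """Sum (low, high) USD ranges for the methods in a design."""
    low = sum(lo for lo, _ in ranges)
    high = sum(hi for _, hi in ranges)
    return low, high

# Totals from the cost comparison table above
SURVEY = (12_500, 29_000)  # 400-household survey
KIIS   = (6_500, 13_000)   # 20 key informant interviews
FGDS   = (6_000, 10_500)   # 8 focus groups

print(combine_cost_ranges(SURVEY, KIIS))        # survey + KIIs: (19000, 42000)
print(combine_cost_ranges(SURVEY, KIIS, FGDS))  # all three: (25000, 52500)
```

So the most common combination (survey + KIIs) lands roughly in the $19,000-42,000 range, before the geography and translation multipliers discussed above.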

Common Mistakes

Mistake 1: Using surveys for everything. If your evaluation question is "why aren't farmers adopting new techniques?" a survey with closed-ended questions will give you a list of checked boxes. An interview will give you the real reasons, including ones you did not think to put on the survey.

Mistake 2: Mixing power dynamics in focus groups. Never put program staff and beneficiaries in the same FGD. Never mix community leaders with ordinary community members for sensitive topics. Group composition determines what people will say.

Mistake 3: Leading questions in interviews. "Don't you think the program has been helpful?" is not a research question. Use open-ended, neutral questions: "How has your experience been with the program?"

Mistake 4: Too many survey questions. A 90-minute household survey causes respondent fatigue, and data quality collapses after 30-40 minutes. Keep surveys under 45 minutes. Cut mercilessly. If you cannot explain why you need a question and how you will use the answer, remove it. The Survey Builder can help you structure and trim your questionnaire.
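A rough way to check questionnaire length against the 45-minute ceiling is to time-cost each question type. The per-question timings here are assumptions for illustration, not figures from this guide:

```python
def estimated_duration_minutes(n_closed, n_open,
                               closed_sec=30, open_sec=120, intro_min=5):
    """Estimate interview length: assumed 30s per closed question,
    2 min per open question, plus consent/introduction time."""
    return intro_min + (n_closed * closed_sec + n_open * open_sec) / 60

d = estimated_duration_minutes(n_closed=70, n_open=5)
print(d, "minutes -", "over the 45-minute ceiling" if d > 45 else "within limits")
```

Running the check during design, before pre-testing, tells you early how much cutting you have ahead of you; pre-testing then validates the real timings.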

Mistake 5: Not pre-testing. Every survey, interview guide, and FGD guide should be pre-tested with people similar to your actual respondents. Pre-testing catches confusing questions, translation errors, and unrealistic time estimates. Budget 2-3 days for pre-testing and revision.

Mistake 6: Treating FGD quotes as individual data. "Participants said they prefer the new seeds" does not mean each individual prefers them. It means the group discussion produced that statement. Some individuals may have disagreed but stayed silent. Report FGD findings as group-level perspectives, not individual views.

Decision Guide

  1. "I need to report on my indicators to the donor." You need a survey. Most indicator reporting requires quantitative data with defined numerators and denominators.

  2. "I need to understand why something is or isn't working." You need KIIs and/or FGDs. Interviews for individual perspectives and expert knowledge. FGDs for community-level dynamics.

  3. "I'm designing a new program and don't know what questions to ask." Start with KIIs and FGDs to explore the context, then design a survey based on what you learn.

  4. "I'm doing a final evaluation." You probably need all three: a survey for indicator measurement, KIIs with stakeholders and program staff, and FGDs with beneficiary groups.

  5. "Budget is very tight." If you can only afford one method, choose based on your primary question. Need numbers? Survey. Need understanding? KIIs. If you can afford two, the survey + KII combination gives you the most versatile evidence base.
