M&E Studio
© 2026 Logic Lab LLC. All rights reserved.

Core Concept · Data Collection · 11 min read

Baseline Design

A structured approach to collecting initial condition data that directly informs project decisions, minimizes burden, and enables valid comparison with endline measurements.

When to Use

Baseline design is the right approach when you need to establish a clear starting point for measuring change. Use it when:

  • Setting indicator targets: Design teams often lack adequate information to confidently propose specific targets during proposal writing. Instead, indicator targets should be set upon completion of a project baseline study and included in the program's Indicator Plan.
  • Measuring program impact: You need to compare endline conditions against initial conditions to assess whether your intervention produced change.
  • Informing project adaptation: You want baseline data that directly informs critical decisions during implementation, not just donor reporting.
  • Meeting donor requirements: Donors like USAID, CRS, and FCDO require documentation of baseline values in the Indicator Performance Tracking Table (IPTT) or donor equivalent.
  • Longer-term projects: For projects 18+ months, conduct a baseline in the first 6 months; for shorter projects, conduct baseline prior to start or no later than 3 months into implementation.

Baseline design is less useful when:

  • Projects are very short: Baseline values documentation is not required for projects less than three years in duration.
  • You only need routine monitoring: For ongoing activity tracking, routine monitoring data collection follows a different workflow.
  • Rapid emergency response: For immediate emergency response, a rapid needs assessment with a different, faster methodology is required.
| Scenario | Use Baseline Design? | Better Alternative |
|---|---|---|
| New project measuring change over time | Yes | — |
| Setting indicator targets | Yes | — |
| Short-term project (<3 years) | Optional | Routine monitoring |
| Emergency rapid assessment | No | Rapid Needs Assessment |
| Post-distribution monitoring | No | PDM workflow |
| Understanding unpredicted outcomes | No | Outcome Harvesting |

How It Works

Baseline design follows several core principles that distinguish it from other data collection activities:

Decision-utility first. Every data point collected must serve a specific, identified decision. If you cannot articulate which specific project decision each indicator will inform, question its necessity. This principle prevents the common dysfunction of data-rich but insight-poor baselines that collect excessive, non-essential data.

Count the burden. Collecting data is not free. Every indicator added to a baseline survey represents costs in terms of time, money, and human resources. Assess the burden (time, cost, complexity, participant fatigue) of collecting each data point. If the burden outweighs the value derived from informing a critical decision, challenge or discard the indicator. This is a core application of the burden-consciousness principle.

Trace to structure. Ensure data collection aligns with project logic, defined indicators, and any relevant structural requirements (e.g., donor modules). The baseline should support subsequent analysis and reporting, not create data that is difficult to code and analyze.

Comparability is key. Design baseline and endline methods to be as identical as possible to enable valid comparisons. Document any necessary deviations. If baseline and endline use different methodologies, tools, or definitions, comparisons become meaningless.
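The comparability check above can be partly automated by comparing the two instruments' indicator definitions before endline fieldwork. The sketch below is illustrative, not a prescribed tool; all field names and metadata keys are hypothetical.

```python
# Hypothetical sketch: flag deviations between baseline and endline
# instruments so each one can be documented and justified.
# Indicator names and metadata are illustrative only.

baseline_tool = {
    "hh_income": {"type": "numeric", "unit": "USD/month", "recall": "30 days"},
    "food_security": {"type": "scale", "unit": "FCS score", "recall": "7 days"},
    "water_source": {"type": "categorical", "unit": None, "recall": "current"},
}

endline_tool = {
    "hh_income": {"type": "numeric", "unit": "USD/month", "recall": "30 days"},
    # Recall period changed from 7 to 30 days -- a deviation to document.
    "food_security": {"type": "scale", "unit": "FCS score", "recall": "30 days"},
    "latrine_access": {"type": "binary", "unit": None, "recall": "current"},
}

def compare_tools(baseline, endline):
    """Return indicators dropped, added, or redefined between two tools."""
    dropped = sorted(set(baseline) - set(endline))
    added = sorted(set(endline) - set(baseline))
    changed = sorted(
        name for name in set(baseline) & set(endline)
        if baseline[name] != endline[name]
    )
    return {"dropped": dropped, "added": added, "changed": changed}

deviations = compare_tools(baseline_tool, endline_tool)
print(deviations)
# -> {'dropped': ['water_source'], 'added': ['latrine_access'],
#     'changed': ['food_security']}
```

Any non-empty entry in the result is a deviation that must be documented before comparisons are drawn.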

Baseline and endline are not evaluations. Baseline and endline studies are not evaluations themselves, but an important part of assessing change. They usually contribute to project/programme evaluation (e.g., a final or impact evaluation), but can also contribute to monitoring changes on longer-term projects/programmes.

Key Components

A well-constructed baseline design includes these essential elements:

  • Decision identification: A documented, prioritized list of key project decisions that the baseline data will inform. This moves beyond broad categories to precise questions about what information is needed for each decision.
  • Indicator selection: A lean, documented list of essential, decision-linked indicators. Each selected indicator should be directly linked to a prioritized decision, with the burden of collecting data for each indicator demonstrably justified by its decision-relevance.
  • Data collection methodology: The most appropriate and cost-effective methods for collecting each data point (e.g., household surveys, key informant interviews, focus group discussions, direct observation, secondary data review). Methods should be clear, concise, unambiguous, and culturally sensitive.
  • Sampling strategy: A representative sampling frame with appropriate sample size calculations, respondent selection procedures, and geographic coverage. For indicators requiring statistically sound data, ensure representative sampling and confidence intervals.
  • Data collection tools: Draft tools (questionnaires, interview guides, checklists) that are clear, concise, easy to administer, and minimally burdensome for respondents and enumerators.
  • Data management protocols: How data will be collected, entered, cleaned, stored, and secured. Specify data formats, validation rules, and quality checks.
  • Quality assurance plan: Procedures for enumerator training, supervision, data verification, and spot-checking to ensure data accuracy and reliability.
  • Informed consent procedures: Clear, simple, and culturally appropriate informed consent forms and verbal consent scripts that explain the purpose of data collection, how data will be used, confidentiality measures, potential risks and benefits, and the participant's right to refuse or withdraw at any time without penalty.
  • Ethical approval: Formal ethical clearance from the designated review body before data collection begins.
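For the sampling strategy component, the sample size calculation for a proportion-type indicator is often done with Cochran's formula plus a finite population correction. The sketch below assumes 95% confidence, a ±5% margin of error, and the conservative prevalence assumption p = 0.5; these parameters are illustrative defaults, not requirements from any particular donor.

```python
import math

def cochran_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's formula for a proportion, with finite population correction.

    z=1.96 corresponds to ~95% confidence; p=0.5 is the most conservative
    assumption about the indicator's prevalence; margin is the precision.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

# e.g. a catchment of 5,000 households at +/-5% and 95% confidence
print(cochran_sample_size(5000))  # -> 357
```

In practice the result is usually inflated further for expected non-response and, in cluster designs, multiplied by a design effect.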

Best Practices

Collect baseline data at the right time. Baseline data should be collected at the very beginning of a project or as soon after the beginning as possible. For longer projects (18+ months), conduct a baseline in the first 6 months; for shorter projects, conduct baseline prior to the start or no later than 3 months into implementation.

Link every indicator to a decision. For each proposed baseline indicator, ask: "If this data shows X, what decision will we make differently?" If you cannot answer this, the indicator is likely a candidate for removal or revision. This ensures the baseline serves as a dynamic tool for adaptation, not just a historical record.
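The decision-linkage test above can be made concrete as a simple review table: each candidate indicator either names the decision it informs, or is flagged for challenge. All indicator and decision names below are hypothetical examples.

```python
# Hypothetical sketch of a decision-first indicator review.
# Indicator and decision names are illustrative only.

candidate_indicators = [
    {"name": "Household dietary diversity score",
     "informs_decision": "Select crops to promote for market linkage"},
    {"name": "Distance to nearest market (km)",
     "informs_decision": "Prioritize market access interventions"},
    {"name": "Number of radios owned",
     "informs_decision": None},  # no linked decision -> challenge it
]

# Retain indicators linked to a decision; flag the rest for removal or revision.
keep = [i["name"] for i in candidate_indicators if i["informs_decision"]]
challenge = [i["name"] for i in candidate_indicators if not i["informs_decision"]]

print("Retain:", keep)
print("Challenge or drop:", challenge)
```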

Include baseline values or plans for all indicators. Include a baseline value, or a plan for establishing one, for every indicator that requires it. Just as every outcome needs at least one performance indicator, every indicator needs a target, and each target needs a baseline.

Incorporate baseline plans in project documents. Baseline plans should be incorporated in the Detailed Implementation Plan (DIP), reflected in the Indicator Performance Tracking Table (IPTT), and clearly linked to the indicators in the MEAL Plan that require a baseline.

Set targets after baseline completion. Design teams often lack adequate information to confidently propose specific targets during proposal writing. Instead, project decision makers should set indicator targets upon completion of the baseline study and record them in the program's Indicator Plan. Targets should be informed by baseline results, the project timeline, the human and financial resources dedicated to the project, and the permissiveness or difficulty of the context, including levels of uncertainty.

Select appropriate data collection methods. Select baseline data collection methods that align with and support the overall evaluation design, are appropriate for the given indicators, and meet donor requirements. Methods should be the most efficient and effective for gathering the required data while minimizing respondent and enumerator burden.

Document baseline values properly. Once the baseline value for an indicator has been determined, record the value and collection date in the 'baseline value' column of the Indicator Plan, and document it in the Indicator Performance Tracking Table (IPTT) or donor equivalent.

Collect baseline information before project activities begin. Baseline information for all indicators must be measured and reported prior to the start of project activities. This ensures you have a true starting point for measuring change.

Common Mistakes

Collecting data without decision utility. The most common failure is designing a baseline that collects data solely for donor reporting, with no plan for how it will inform decisions. When a data point cannot be linked to a specific project decision, it adds excess burden that wastes resources.

Using different methodologies at baseline and endline. If baseline and endline use different methods, tools, or definitions, comparisons become meaningless. Design both time points to be as identical as possible to enable valid comparisons. Document any necessary deviations.

Skipping pilot testing. Rushing to deploy without pilot testing leads to major, costly errors in the field. A pilot test should systematically identify issues with tool clarity, flow, length, and cultural appropriateness before full deployment.

Underestimating ethical review timelines. Ethical review timelines can vary significantly and may require proactive follow-up. Underestimating the time required for review and approval leads to project schedule slippage.

Collecting baseline after project start. If the baseline is conducted after implementation has begun, you have already lost the ability to measure true initial conditions. Baseline information must be measured prior to the start of project activities.

Not documenting baseline values. Once baseline values are determined, they must be recorded in the Indicator Plan. Without proper documentation, you cannot track progress against targets or demonstrate change to donors.

Treating baseline as an evaluation. Baseline and endline studies are not evaluations themselves. They are important parts of assessing change and contribute to project/programme evaluation, but they do not constitute a full evaluation.

Examples

Agricultural Livelihoods, East Africa

A 5-year agricultural resilience programme in Kenya and Uganda developed a baseline design that explicitly linked each indicator to a critical project decision. The team identified five key decisions: which crops to promote for market linkage, what training types farmers need, what market access barriers exist, which farmer groups are most receptive to new techniques, and how to allocate resources. For each decision, they collected only the data needed to inform it. When mid-term monitoring revealed land tenure was the binding constraint (not seed availability as expected), the programme redirected resources accordingly. The baseline functioned as a diagnostic tool, not just a design artefact.

WASH, South Asia

A water and sanitation programme in Bangladesh designed baseline and endline surveys to be identical in methodology, tools, and sampling approach. This enabled valid comparison of health outcomes between time points. The evaluation found that behaviour change communication was responsible for 60% of health improvements, while infrastructure contributed 40%. Without the comparable baseline-endline design, this finding would have been invisible.

Protection, West Africa

A protection programme in Sierra Leone initially planned a comprehensive baseline with 80 indicators. Using a decision-first approach, the team mapped each indicator to a project decision and found that only 25 indicators directly informed critical choices. They collected the remaining 55 indicators as "nice-to-know" data through secondary sources where possible, reducing survey time from 90 minutes to 45 minutes and freeing resources for deeper analysis of essential data.

Compared To

Baseline design is one of several approaches to establishing initial conditions. The key differences:

| Feature | Baseline Design | Rapid Needs Assessment | Survey Design |
|---|---|---|---|
| Primary purpose | Measure initial conditions for change assessment | Immediate emergency response needs identification | General survey methodology |
| Timeframe | 4-8 weeks | Days to 2 weeks | Varies by scope |
| Depth | Comprehensive, decision-linked | Limited, prioritized | Varies by design |
| Sampling | Representative, statistically rigorous | Often purposive or convenience | Depends on objectives |
| Best for | Measuring program impact | Emergency response | General data collection |
| Comparison | Designed for baseline-endline comparison | Not designed for comparison | Depends on design |

Relevant Indicators

18 indicators across 4 major donor frameworks (USAID, CRS, FCDO, ECHO) relate to baseline design and use:

  • Baseline documentation: "Proportion of indicators with documented baseline values collected before project start" (USAID)
  • Decision linkage: "Percentage of baseline data collection linked to specific project decisions" (FCDO)
  • Baseline timing: "Baseline assessment completed within 6 months of project start for projects 18+ months" (CRS)
  • IPTT documentation: "Baseline values documented in Indicator Performance Tracking Table (IPTT)" (USAID)

Related Tools

  • Baseline Report Template: Structured template for documenting baseline methodology, findings, and recommendations
  • Survey Design Tool: Interactive tool for designing comparable baseline and endline surveys

Related Topics

  • Indicator Selection: Selecting the right indicators for your baseline
  • SMART Indicators: Ensuring baseline indicators are measurable and specific
  • Sampling Methods: Choosing appropriate sampling for your baseline
  • Survey Design: Designing effective data collection tools
  • Data Collection Burden: Minimizing respondent and enumerator burden
  • Target Setting: Setting targets based on baseline results
  • Monitoring vs Evaluation: Understanding where baseline fits in the M&E system

Further Reading

  • Baseline and Endline Studies: A Practical Guide, BetterEvaluation. Comprehensive guide to designing and implementing baseline and endline studies.
  • USAID Performance Monitoring and Evaluation Policy, USAID. Donor requirements for baseline documentation and reporting.
  • CRS MEAL Policies and Procedures, CRS. Organizational standards for baseline assessment timing and documentation.
  • The Baseline Handbook, UNDP. Practical guidance for practitioners on baseline design and implementation.

At a Glance

Establishes initial conditions data that directly informs project decisions and enables valid comparison with endline measurements.

Best For

  • Setting realistic indicator targets based on actual starting conditions
  • Informing critical project adaptation decisions during implementation
  • Measuring change over time through baseline-endline comparison
  • Meeting donor requirements for initial conditions documentation

Complexity

Medium

Timeframe

4-8 weeks for design and implementation

Linked Indicators

18 indicators across 4 donor frameworks

USAID · CRS · FCDO · ECHO

Related Topics

  • Indicator Selection & Development (Core Concept): The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.
  • SMART Indicators (Core Concept): A quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.
  • Sampling Methods (Core Concept): Systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.
  • Survey Design (Core Concept): The process of designing structured questionnaires and survey protocols to collect reliable, valid, and actionable data from a defined population.
  • Data Collection Burden (Core Concept): The total time, effort, and resources required from respondents and implementers to complete data collection activities, balanced against data quality needs and programme capacity.
  • Target Setting (Core Concept): The process of establishing specific, time-bound performance benchmarks against which programme progress and achievement will be measured.
  • Monitoring vs Evaluation (Term): Monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.