December 3, 2025

Jerry Arbittier

It’s tempting to fall for the seductive logic of big numbers in research. More respondents mean more data points, which should mean more reliable findings.

Right? We’ve all been there, falling for this trap.

This kind of thinking makes perfect sense until you realize you’ve spent three weeks analyzing responses from people who barely understood your questions, didn’t fit your target audience, or clicked through your survey while watching Netflix.

What does the research say?

The objective of quantitative research is to reduce estimation error and produce generalizable findings. Research demonstrates that statistical power and precision depend not just on sample size, but critically on sample quality and representativeness.

Random sampling represents the gold standard for minimizing bias and ensuring methodological rigor in quantitative research. The fundamental objective is to reduce estimation error, enabling researchers to draw reliable, generalizable inferences about a broader population from a smaller subset. But that starts before anyone is sampled: you have to clearly define who’s in the population in the first place. Random sampling isn’t just “picking people at random.” It’s “pick people at random from a well-defined group that meets explicit inclusion criteria.” If the population frame is fuzzy, the randomness doesn’t mean much.
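As a minimal sketch of that idea, here is what “sample from a well-defined frame” might look like in practice. The frame, field names, and inclusion criterion below are hypothetical, chosen purely to illustrate the sequence: define the population first, then randomize.

```python
import random

# Hypothetical population frame: every record we *could* sample from.
frame = [
    {"id": 1, "role": "VP of Engineering", "buys_software": True},
    {"id": 2, "role": "Junior Developer", "buys_software": False},
    # ... thousands more records
]

# Explicit inclusion criteria define the population *before* sampling.
def meets_criteria(person):
    return person["buys_software"]

eligible = [p for p in frame if meets_criteria(p)]

# Only now does random selection mean something: every member of the
# well-defined population has an equal chance of being drawn.
sample = random.sample(eligible, k=min(400, len(eligible)))
```

The order of operations is the whole point: randomizing over a fuzzy frame just distributes the fuzziness evenly.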

The accuracy of a random sample isn’t driven solely by how many people you collect; it’s driven by how well those respondents truly reflect the population you’ve defined.

When Does More Data Not Mean Better Estimates?

There is a point of diminishing returns with larger samples. Beyond adequate statistical power, more data does not necessarily lead to better estimates; it simply leads to more expensive research with marginal improvements in precision.

Standard statistical formulas show us where this inflection point occurs: the margin of error shrinks only with the square root of the sample size. For instance, increasing a sample from 400 to 1,600 (a 4x increase in cost and effort) only halves the margin of error. The practical significance of this improvement often doesn’t justify the investment.
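A minimal sketch of that arithmetic, using the standard margin-of-error formula for a proportion at 95% confidence and the conservative assumption p = 0.5 (the function name is ours, for illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion; p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=400:  ±{margin_of_error(400):.1%}")   # ≈ ±4.9%
print(f"n=1600: ±{margin_of_error(1600):.1%}")  # ≈ ±2.5%
```

Quadrupling the sample buys roughly 2.5 points of precision. Whether that is worth four times the cost is a business judgment, not a statistical one.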

Most robust quantitative studies achieve their objectives with sample sizes determined by:

  • Required confidence level (typically 95%)
  • Acceptable margin of error (often ±3-5%)
  • Population variance on key measures

This tells us something crucial about quality: specification of research objectives and understanding of population characteristics matter more than arbitrary scale.
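As an illustrative sketch of how those three inputs combine, here is the classic sample-size formula for a proportion, n = z² · p(1−p) / e². Treat it as a planning aid under the stated assumptions, not a substitute for a proper power analysis:

```python
import math

# z-scores for common confidence levels
Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def required_sample_size(confidence=0.95, margin=0.05, p=0.5):
    """n = z^2 * p(1-p) / e^2; p = 0.5 assumes maximum variance."""
    z = Z[confidence]
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(required_sample_size(margin=0.05))  # 385: 95% confidence, ±5%
print(required_sample_size(margin=0.03))  # 1068: 95% confidence, ±3%
```

Notice that population size barely enters the picture for large populations; what moves the number is the precision you demand and the variance you expect.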

What Makes a Quality Quantitative Sample?

When conducting healthcare research, B2B technology studies, or consumer experience investigations, the quality of our respondents hinges on several factors:

Population Alignment: Respondents must genuinely belong to the target population you’re trying to understand. A study of enterprise software buyers requires actual decision-makers, not just IT staff who might be tangentially involved.

Response Quality Indicators: The best samples demonstrate thoughtful engagement: reasonable completion times, consistent response patterns across validation checks, and meaningful open-ended responses when included.
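As a hypothetical example of how those indicators translate into practice, a first-pass quality screen might flag speeders, straight-liners, and failed validation checks. The thresholds and field names below are illustrative assumptions, not industry standards; calibrate them to your own study.

```python
def flag_low_quality(resp, median_seconds):
    """Flag respondents whose behavior suggests disengagement."""
    flags = []
    # Speeders: finished in under a third of the median completion time.
    if resp["seconds"] < median_seconds / 3:
        flags.append("speeder")
    # Straight-lining: identical answers across every grid item.
    if len(set(resp["grid_answers"])) == 1:
        flags.append("straight_liner")
    # Failed an attention check embedded in the questionnaire.
    if not resp["passed_validation_check"]:
        flags.append("failed_check")
    return flags

resp = {"seconds": 95, "grid_answers": [3, 3, 3, 3, 3],
        "passed_validation_check": True}
print(flag_low_quality(resp, median_seconds=600))
# ['speeder', 'straight_liner']
```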

Minimal Selection Bias: Quality samples account for and minimize systematic exclusion of particular population segments through careful panel construction, recruitment strategies, and response rate analysis.

The Challenges

Stakeholder trust matters: a well-chosen sample builds credibility, but a sample that looks too small can lead people to doubt the findings. This is where many researchers face pressure to inflate sample sizes beyond what’s methodologically necessary.

How do we solve this?

The key is transparency. When stakeholders understand that, in qualitative work, 12-15 well-curated interviews can reach saturation and provide robust insights, they’re often more comfortable with smaller, higher-quality samples than with large but shallow ones.

What Perfect Curation Looks Like

At AOPS, we believe perfection is achieved when every respondent in your research is there for a specific reason, contributes something distinct, and helps build a complete picture of the truth you’re seeking.

The research community is slowly shifting toward more nuanced thinking about sample size. In qualitative studies, four approaches to estimating sample size are now commonly described: rules of thumb, conceptual models, saturation, and statistics-based methods.

Each approach has value, but they all converge on a central truth: who you include matters more than how many you include. The next time you’re designing research, resist the reflexive reach for large sample sizes. Instead, ask: Who are the specific people whose experiences and perspectives would genuinely teach us something we don’t already know? Then do the harder work of finding those people, crafting screening that identifies real qualifications, and creating research experiences that enable depth rather than breadth.

Because in research, as in so many things, perfection doesn’t come from having the most data; it comes from having the right voices, asking the right questions, and creating space for the insights that only emerge when you prioritize quality over quantity.

• • • • •

Want more market research best practices?

 Contact us at jerry.arbittier@aops.us or 917-327-0533.
Copyright © 2026 AOPS