Statistically Significant Sample Sizes
There are no magic numbers for sample size. There is no such thing as a statistically significant sample.
Unfortunately, those two words—statistically significant—are bandied about with such abandon that they are quickly losing their meaning. Even people who should know better (the data wonks at Google Surveys should know better, right?) are saying ridiculous things as they promise to help you “generate statistically significant results.”
Here is a useful passage from the Reference Manual on Scientific Evidence, compiled by the Federal Judicial Center and the National Research Council:
Many a sample has been praised for its statistical significance or blamed for its lack thereof. Technically, this makes little sense. Statistical significance is about the difference between observations and expectations. Significance therefore applies to statistics computed from the sample, but not to the sample itself, and certainly not to the size of the sample. … Samples can be representative or unrepresentative. They can be chosen well or badly. They can be large enough to give reliable results or too small to bother with. But samples cannot be “statistically significant,” if this technical phrase is to be used as statisticians use it.
Which is not to say that sample size doesn’t matter. It matters a lot, because larger samples (if chosen carefully) allow you to calculate estimates with more precision. But you can calculate estimates, specify the precision of those estimates, and conduct tests of statistical significance on any sample size at all.
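To make the point concrete, here is a minimal sketch of how precision relates to sample size, using the standard 95% margin of error for an estimated proportion (the function name and the example numbers are illustrative, not from the original text):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n responses.

    Assumes a simple random sample; z=1.96 is the 95% normal critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A proportion estimated at 50% (the worst case for precision) from 400 responses:
print(round(margin_of_error(0.5, 400), 3))  # 0.049, i.e. about ±5 percentage points
```

The same calculation works for a sample of 40 or 40,000; only the width of the interval changes.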
So what sample size do you need for acceptable levels of precision? For most research, we recommend sample sizes ranging from 100 to 1,200, depending on your objectives and the audience you are trying to reach. Take a look at our Interactive Graph for Choosing Sample Size, which provides some guidelines. It also offers a fascinating look at the diminishing returns of very large sample sizes.
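Those diminishing returns can be sketched numerically. Using the same worst-case (50%) proportion as an assumption, each halving of the margin of error requires roughly quadrupling the sample:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error (in percentage points) at increasing sample sizes:
for n in (100, 400, 1200, 5000, 20000):
    print(f"n={n:>6}: ±{100 * margin_of_error(0.5, n):.1f} points")
# n=   100: ±9.8 points
# n=   400: ±4.9 points
# n=  1200: ±2.8 points
# n=  5000: ±1.4 points
# n= 20000: ±0.7 points
```

Going from 100 to 1,200 responses cuts the margin of error by more than two-thirds; going from 5,000 to 20,000 buys less than a single percentage point.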