Higher response rates result in larger sample sizes and reduced nonresponse bias. Research on ways to increase response rates for mail and Internet surveys suggests that the following steps will improve the odds that participants will complete and return your survey, whether by Internet or mail.

Make the survey as salient as possible to potential respondents.
Salience can be tested with a small group of people similar to your intended respondents.

If possible, use Likert-type questions rather than open-ended questions to increase response rates.
Generally, the shorter the survey appears to respondents, the better.

Limit the number of questions of a sensitive nature, when possible.
Additionally, if possible, make the survey anonymous, as opposed to confidential.

Include prenotification and follow-ups to survey respondents.
Personalizing these contacts will also increase response rates. In addition, surveys conducted by noncommercial institutions (e.g., colleges) obtain higher response rates than those conducted by commercial institutions.

Provide additional copies of or links to the survey.
This can be done as part of follow-up with potential respondents.

Provide incentives. 
Incentives included in the initial mailing produce higher return rates than incentives contingent on survey return, and up-front monetary incentives yield about twice the increase of nonmonetary ones.

Consider these additional strategies for mail surveys:
Send surveys by recorded delivery, print them on colored paper, and provide addressed, stamped return envelopes.

Consider the following when conducting an Internet survey:
Provide a visual indicator of how much of the survey respondents have completed or, alternatively, how much they have left to complete.

Although there are no hard-and-fast rules for what constitutes an appropriate response rate, many government agencies require response rates of 80 percent or higher before they are willing to report results. If you have conducted a survey and still have a low response rate, make additional efforts or use a different survey mode to reach non-respondents; however, take care to ensure that these late respondents do not answer differently than initial respondents and that the change in survey mode itself does not introduce bias.
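For readers who want to check their own numbers against the 80 percent threshold mentioned above, the short sketch below (a hypothetical helper, not part of the original article) computes a response rate under the simplest definition: completed returns divided by surveys distributed. Formal definitions, such as those that account for ineligible sample members, are more nuanced.

```python
def response_rate(completed: int, distributed: int) -> float:
    """Response rate as the share of distributed surveys that were completed."""
    if distributed <= 0:
        raise ValueError("distributed must be positive")
    return completed / distributed

# Example: 640 completed returns out of 800 distributed surveys.
rate = response_rate(640, 800)
print(f"{rate:.0%}")   # 80%
print(rate >= 0.80)    # True: meets the 80 percent threshold
```
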

 

*This blog is a reprint of an article from an EvaluATE newsletter published in spring 2010.

About the Authors

Amy Germuth

Founder and President, EvalWorks, LLC

Dr. Amy A. Germuth is the founder and president of EvalWorks, LLC, an evaluation firm located in Durham, NC. She has a B.S. in mathematics, an M.S. in education administration, and a Ph.D. in education psychology, measurement, and evaluation. Dr. Germuth has been an evaluator for over 15 years and focuses on evaluating STEM initiatives. She has served as the external evaluator on multiple NSF-funded grants, including Noyce, GK12, DRK12, ISE, ITEST, MSP, ATE, and TUES grants, as well as NASA grants and NIH SEPA grants. Dr. Germuth is an active member of the American Evaluation Association.

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

EvaluATE is supported by the National Science Foundation under grant numbers 0802245, 1204683, 1600992, and 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.