When participants fail to return a survey or fill it out completely, the results can affect the size and characteristics of the sample, which, in turn, can compromise the external validity of your dissertation. However, there are strategies you can use to overcome non-response bias.
Non-response bias is “the kind of bias that occurs when some subjects choose not to respond to particular questions and when the non-responders are different in some way (they are a non-random group) from those who do respond” (Vogt, 2005, p. 210). Not only do subjects often fail to “respond to a particular question,” but perhaps more detrimental to the sample, they may fail to respond at all. The former type of non-response is called "item non-response" and the latter is termed "unit non-response" (Berg, 2002, p. 1).
Non-response bias is particularly problematic for two reasons. First, it can create bias in the sample. If the subjects who do not answer specific questions or fail to return the survey have certain characteristics—for example, if all non-respondents are male—this can affect the randomness of the sample. If the sample is biased and no longer random, then it lacks the potential to be representative of the larger population from which the sample was drawn, thereby limiting the dissertation's or study's external validity.
Second, samples need to be a certain size. If a sample is too small in proportion to the population or as required by the type of statistical test, the researcher will not have enough information from which to make a statistical inference about the population—which is the point of taking a sample in the first place. For example, imagine a scenario in which the number of subjects a researcher needs is 200. The researcher, failing to anticipate non-response bias, sends out exactly 200 surveys. What happens when 50, 60, or 70 people fail to respond to the survey? Well, you do the math.
What might a researcher or dissertation student do to counteract non-response bias in the scenario above? One obvious strategy is to build in room for error. Instead of treating the minimum sample size—200 in the case above—as the number of surveys to send, the researcher sends out more, ideally exceeding the minimum by at least the anticipated number of non-respondents. Because the upper bound on the sample size is the population itself, the researcher can sample as many subjects as cost and time allow.
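The padding described above can be computed directly: divide the required number of completed surveys by the response rate you expect. A minimal sketch (the 70% response rate is an assumed figure for illustration, e.g. from a pilot study or prior literature):

```python
import math

def adjusted_sample_size(required_n: int, expected_response_rate: float) -> int:
    """Inflate the required sample size so that, at the expected
    response rate, roughly required_n completed surveys come back."""
    if not 0 < expected_response_rate <= 1:
        raise ValueError("expected_response_rate must be in (0, 1]")
    return math.ceil(required_n / expected_response_rate)

# If 200 completed surveys are needed and about 70% of recipients
# are expected to respond, invite 286 people, not 200.
print(adjusted_sample_size(200, 0.7))  # → 286
```

This is only a hedge against unit non-response; it does nothing about which kinds of people fail to respond.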
Another strategy is to increase the size of the non-respondent proportion of the sample. If the demographic characteristics salient to the non-respondent population are known—through a pilot study, for instance—then it is possible to increase the size of the random sample from the population strata defined by those characteristics (Barclay, Todd, Finlay, Grande, & Wyatt, 2002). These strategies, however, only address the problem of sample size, not item non-response.
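If pilot data supply per-stratum response rates, the same inflation can be applied stratum by stratum, deliberately oversampling the groups known to respond least. A sketch under assumed figures (the strata and rates are hypothetical):

```python
import math

def stratified_sample_sizes(required_by_stratum: dict[str, int],
                            response_rate_by_stratum: dict[str, float]) -> dict[str, int]:
    """Inflate each stratum's target by that stratum's own expected
    response rate, so low-responding strata are oversampled."""
    return {
        stratum: math.ceil(needed / response_rate_by_stratum[stratum])
        for stratum, needed in required_by_stratum.items()
    }

# Hypothetical pilot figures: men responded at 50%, women at 80%.
targets = stratified_sample_sizes(
    {"male": 100, "female": 100},
    {"male": 0.5, "female": 0.8},
)
print(targets)  # → {'male': 200, 'female': 125}
```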
Other strategies to overcome potential error take into account why survey recipients fail to respond. People may fail to return surveys for any number of reasons. They may have forgotten about it, or they may be so busy they don’t want to take the time to fill one out. Others view surveys as intrusive on their personal lives. Perhaps the survey is too long, or they misplaced it if it is a paper survey. Further, the ubiquity of commercial surveys, some of which masquerade as research surveys, has made many people skeptical (Lydeard, 1996). In general, people are more apt to respond to a survey in which they perceive there to be a value beyond a commercial interest. The more important or relevant the survey is, the more motivation participants will have to respond (Goyder, 1982).
Other issues are less psychological and more mechanical. If the survey is done online, the respondents may not know how to log onto the survey site or even know how to locate the site that hosts the survey. In the case of a paper survey, the instructions may be ambiguous, the questions poorly worded, or the survey too long. The survey package may contain a poorly worded cover letter or lack self-addressed return envelopes.
Given the above reasons for non-response, the following measures can help counteract non-response bias:
Take the time to write a straightforward, effective cover letter. State your reasons for conducting the survey. Emphasize the educational or beneficial nature of the research. Personalize it by letting participants know it is for your dissertation or thesis. Give the potential respondent a reason to spend a few minutes of their time filling out your survey.
Provide clear instructions on how to fill out the survey.
If you don’t get a survey back, politely remind the participant via e-mail, over the phone, or in person. All three types of reminders are effective.
Provide a reward or incentive. A drawing or a gift card is an excellent way to entice people to fill out the survey. Alternatively, indicate that you will donate a dollar to a reputable charity for every completed survey (or do both).
Emphasize the confidentiality of the material, and explain when and how the results will be used, stored, and destroyed.
If possible, reduce the size of the survey. The shorter the survey, the greater the chance the participant will fill it out.
In general, put yourself in the respondent’s shoes. Ask yourself, “What would make me want to fill out a survey?” Finally, in cases where you cannot overcome non-response bias through practical methods, there are statistical procedures, including imputation of missing values, adding additional weight to the surveys completed by those who share the same characteristics as those in the non-respondent pool, and using a maximum-likelihood approach. Nathan Berg (2002) discusses these procedures in his article on non-response bias.
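To illustrate the weighting idea, one simple form is a weighting-class adjustment: respondents in each demographic class receive a weight equal to the inverse of that class's response rate, so classes hit hardest by non-response count proportionally more in the estimate. A sketch with made-up data (the classes, invitation counts, and scores are hypothetical):

```python
def weighting_class_mean(responses: dict[str, list[float]],
                         invited: dict[str, int]) -> float:
    """Estimate the overall mean by weighting each class's respondents
    by the inverse of that class's response rate."""
    weighted_sum = 0.0
    total_weight = 0.0
    for cls, values in responses.items():
        rate = len(values) / invited[cls]   # observed class response rate
        weight = 1.0 / rate                 # inverse-rate weight
        weighted_sum += weight * sum(values)
        total_weight += weight * len(values)
    return weighted_sum / total_weight

# Made-up example: 10 men and 10 women invited; only 2 men but 8 women
# replied, so each male respondent carries four times the weight.
est = weighting_class_mean(
    {"male": [3.0, 5.0], "female": [4.0] * 8},
    {"male": 10, "female": 10},
)
print(round(est, 2))  # → 4.0
```

This is only a toy version of the procedures Berg (2002) surveys; real adjustments must also justify the assumption that respondents and non-respondents within a class are alike.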
Barclay, S., Todd, C., Finlay, I., Grande, G., & Wyatt, P. (2002). Not another questionnaire! Maximizing the response rate, predicting non-response and assessing non-response bias in postal questionnaire studies of GPs. Family Practice, 19(1), 105-111.
Berg, N. (2002). Non-response bias. University of Texas at Dallas. Retrieved from https://www.utdallas.edu/~nberg/Berg_ARTICLES/BergNon-ResponseBiasMay2002.pdf
Goyder, J. (1982). Further evidence on factors affecting response rates to mailed questionnaires. American Sociological Review, 47, 550-553.
Lydeard, S. (1996). Commentary: Avoid surveys masquerading as research. British Medical Journal, 313, 733-734.