Introduction to Special Issue of Survey Practice on Item Nonresponse

Don A. Dillman, Washington State University

Considerable interest exists in the joint use of Web and mail questionnaires to collect sample survey data. This mixed-mode interest stems from two important considerations. First, nearly one-third of all U.S. households either do not have Internet access or use it infrequently (less than once a week), making it unlikely that Internet surveys will be completed by representative samples of all households (Pew Research Center, 2010).  Second, address-based sampling (ABS), which appears to be our most adequate household sample frame (Iannacchione, 2011), makes it possible to use mail contacts to request Web survey responses from those who are able and willing to respond in that way.  For those who cannot or will not respond over the Internet, mail questionnaires provide an alternative means of responding that is likely to improve the demographic representativeness of respondents (Messer and Dillman, 2011).

Previous research has suggested that one shortcoming of mail questionnaires is that they produce higher item nonresponse rates than either telephone or face-to-face interviewing (de Leeuw, 1992; de Leeuw et al., 2003). Research on item nonresponse differences between Web and mail surveys has produced mixed results: some studies have reported lower rates for Web surveys (Bech and Kristensen, 2009; Boyer et al., 2002; Denscombe, 2006; Kiesler and Sproull, 1986; Kwak and Radler, 2002), one found similar rates (Wolfe et al., 2009), and two others found higher rates for Web surveys (Brečko and Carstens, 2007; Manfreda and Vehovar, 2002). This variation in results suggests a need for additional research to clarify past findings. If mail surveys consistently produce substantially higher item nonresponse rates than Web surveys, that difference could pose a problem for pairing the two modes in a mixed-mode design.

There are reasons to expect that modern Internet survey methods, with faster Web connections and more advanced construction capabilities, will achieve lower item nonresponse than mail surveys. These design features include individual page construction, automatic branching from screen questions, and better control of the navigational path through the questionnaire (Kwak and Radler, 2002). In theory, item nonresponse to Web questionnaires can be eliminated entirely by requiring an answer to every item. In practice, that procedure may be unacceptable, both because Institutional Review Board (IRB) requirements hold that all individual answers to survey questions be “voluntary” and because requiring an answer to every item may prompt early terminations, lowering overall unit response.

The four papers assembled for this special issue of Survey Practice were all presented in a thematic session at the 2011 AAPOR Conference.  Each of these papers addresses the question of whether the quality of questionnaire responses differs across modes, and how combining mail and Web modes in data collection affects item nonresponse.  All of the papers included here provide explicit comparisons of item nonresponse for mail and Web questionnaires using Web programming that did not require a response to each question, except when branching was required to determine the next appropriate question.

The first analysis, by Messer, Edwards and Dillman, examines item nonresponse in three surveys of households drawn from state and regional address-based samples. The large number of respondents to each survey mode within the three experiments makes it possible to examine the effects of demographic and questionnaire characteristics by mode.

The second analysis, by Lesser, Newton and Yang, also reports item nonresponse differences between Web and mail questionnaire respondents in general public surveys. The authors use an annual survey on similar topics over three years, and include a telephone mode in two of those years. This allows comparisons of telephone, mail-only, and Web+mail designs, which were being considered as data collection alternatives.

The third analysis, by Israel and Lamm, is a quasi-general public survey of clients of the Florida Cooperative Extension Service, which provides nonformal education to all interested persons. The authors test item nonresponse for groups that provided e-mail contact information, which was then used to obtain higher proportions of Web vs. paper responses. They also provide insight into how item nonresponse varies across question structures and across multiple years.

The fourth paper, by Millar and Dillman, provides a Web and mail comparison of item nonresponse for university undergraduate students. Because both postal and e-mail addresses were available, students could be assigned randomly to either Web or mail treatment groups. This eliminated choice of response mode as a contributor to Web vs. mail differences in item nonresponse rates.

Results of these analyses are strikingly consistent. Overall, paper questionnaires sent to the general public generate slightly higher item nonresponse than do Web surveys. Differences by question type vary considerably, but questions eliciting higher item nonresponse in one mode tend to do so in the other mode as well. In contrast, the student survey exhibited no significant overall difference in item nonresponse across modes, although, as in the general public surveys, there were variations by question type.

Together these studies suggest that while differences in item nonresponse between Web and mail should not be ignored in the design of mixed-mode surveys, the differences are small enough that they do not constitute a major barrier to combining mail and Web data collection in the same mixed-mode survey.

Suggested Citation

Dillman, Don A. 2012. “Introduction to Special Issue of Survey Practice on Item Nonresponse.” Survey Practice, April.

References

Bech, Mickael and Morten Bo Kristensen. “Differential response rates in postal and Web-based surveys among older respondents.”  Survey Research Methods 3.1(2009): 1-6.

Boyer, Kenneth K., John R. Olson, Roger J. Calantone, and Eric C. Jackson. “Print versus electronic surveys: a comparison of two data collection methodologies.”  Journal of Operations Management 20(2002): 357-373.

Brečko, Barbara Neza and Ralph Carstens. “Online Data Collection in SITES 2006: Paper survey versus Web survey – Do they provide comparable results?” Proceedings of the IEA International Research Conference (IRC 2006). Washington, D.C., 2007: 261-269.

de Leeuw, Edith D. Data Quality in Mail, Telephone, and Face-to-face Surveys. Amsterdam: TT-Publicaties, 1992.

de Leeuw, Edith D., Joop Hox, and Mark Huisman. “Prevention and treatment of item nonresponse.” Journal of Official Statistics 19.2(2003): 153-76.

Denscombe, Martyn. “Web-based questionnaires and the mode effect: An evaluation based on completion rates and data contents of near identical questionnaires delivered in different modes.”  Social Science Computer Review 24(2006): 246-254.

Iannacchione, Vincent G. “The changing role of Address-Based Sampling in survey research.” Public Opinion Quarterly 75.3(2011): 556-575.

Kiesler, Sara and Lee S. Sproull.  “Response effects in the electronic survey.”  Public Opinion Quarterly 50.3(1986): 402-413.

Kwak, Nojin and Barry Radler.  “A comparison between mail and Web surveys: Response pattern, respondent profile, and data quality.”  Journal of Official Statistics 18.2(2002): 257-273.

Manfreda, Katja Lozar and Vasja Vehovar. “Do Web and mail surveys provide the same results?” Development in Social Science Methodology 18(2002): 149-169.

Messer, Benjamin L. and Don A. Dillman. “Surveying the general public over the Internet using Address-Based Sampling and mail contact procedures.” Public Opinion Quarterly 75.3(2011): 429-457.

Pew Research Center. Data tabulations, Social Side of the Internet. 28 November 2011.

Wolfe, Edward W., Patrick D. Converse, Osaro Airen, and Nancy Bodenhorn.  “Unit and item nonresponses and ancillary information in Web- and paper-based questionnaires administered to school counselors.”  Measurement and Evaluation in Counseling and Development 21.2(2009): 92-103.
