Steven S. Coughlin, Dept. of Veterans Affairs; Pablo Aliaga, Health Research and Analysis; Shannon Barth, Stephanie Eber, Jessica Maillard, Clare M. Mahan, Han K. Kang, and Aaron Schneiderman, Dept. of Veterans Affairs; Samar DeBakey, Health Research and Analysis; Patricia Vanderwolf, Abt SRBI; Meredith Williams, HMS Technologies
There is a continuing need to conduct surveys of U.S. veterans in order to examine important health questions. In surveys of veterans, as in surveys of the general population, achieving high response rates has become increasingly challenging. Despite the importance of response rates to the scientific validity of study findings, few studies have examined ways to maximize participation rates in veteran surveys conducted via mail questionnaire or web-based approaches. Prior studies of incentives have often involved public opinion surveys or market research rather than the types of health surveys that are vital to monitoring the health of veteran populations. In this study, incentives increased the response rate while shifting the distribution of military and personal characteristics relative to the sample distribution.
The sizeable literature on survey research indicates that people respond to surveys for a variety of reasons, including perceptions about the sponsor of the survey, the importance of the topic, survey length, reciprocity, and altruism (Groves et al., 1992). Studies in non-veteran populations have shown that providing monetary incentives to survey respondents is positively associated with response rates (Thompson, 1985; James and Bolstein, 1990). Enclosing a monetary incentive with a request for survey participation may help to build trust with the potential survey participant (Dillman, 2007). Studies have shown that unconditional (pre-paid) incentives are more effective in increasing response rates than incentives promised in return for survey participation (i.e., conditional incentives), and that monetary incentives increase response rates more than gifts or lotteries (Goyder, 1994; Hopkins and Gullickson, 1992; Jobber et al., 2004). In addition, monetary incentives are likely to have a larger effect in studies with low response rates than in other studies (Jackle and Lynn, 2008).
Incentive effects may differ in veteran and non-veteran populations because of a variety of factors. Veterans may be reluctant to participate in a government survey because of concerns about divulging private information about sensitive topics. Anecdotal information suggests that some veterans may be reluctant to participate in health surveys because they do not wish to have their government benefits or security clearances adversely affected. A further issue is that veterans are often invited to participate in health surveys, and some veterans may be experiencing "survey fatigue." On the other hand, veterans may wish to participate in a survey on veteran health topics because they believe the topic is important or they are altruistic.
We conducted a pilot study from April through July 2009 to examine the effectiveness of a $5 financial incentive in increasing participation rates as part of a national survey of recent veterans who had been deployed in support of Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF). The pilot study was conducted at the request of the Office of Management and Budget (OMB) as part of the National Health Study of a New Generation of United States Veterans.
Probability samples of 1,500 veterans deployed to OEF/OIF and 1,500 veterans who served during the same era but had not been deployed to OEF/OIF were selected for this pilot study. The samples were drawn using records provided by the Department of Defense Manpower Data Center (DMDC) and the Department of Veterans Affairs/Department of Defense Identity Repository (VADIR) database. The samples were stratified by deployment status and gender and randomized to three incentive groups: a no-incentive group; a promised-incentive group that received a $5 check after completion of the survey; and a pre-paid group that received a $5 incentive check with the first mailing. The pre-paid monetary incentive offered in this study was unconditional in that receipt of the incentive did not depend upon survey completion. There were 1,000 veterans in each incentive group (500 deployed and 500 non-deployed). Only veterans born before 1986 were sampled, a restriction that made the age distributions of the deployed and non-deployed groups more comparable. Women veterans were oversampled to comprise 20% of potential participants.
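The stratified randomization described above can be sketched as follows. This is a minimal illustration, not the study's actual allocation procedure; the roster field names (`id`, `deployed`, `gender`) are assumptions for the example.

```python
import random

def assign_incentive_groups(roster, seed=2009):
    """Randomly assign sampled veterans to three incentive arms within
    strata defined by deployment status and gender. `roster` is a list
    of dicts with 'id', 'deployed' (bool), and 'gender' keys -- these
    field names are illustrative, not from the study."""
    rng = random.Random(seed)
    arms = ["none", "promised", "pre-paid"]
    assignments = {}
    # Group veterans into strata, shuffle within each stratum, then
    # deal members round-robin across the three arms so that every
    # arm receives a near-equal share of each stratum.
    strata = {}
    for v in roster:
        strata.setdefault((v["deployed"], v["gender"]), []).append(v["id"])
    for members in strata.values():
        rng.shuffle(members)
        for i, vid in enumerate(members):
            assignments[vid] = arms[i % len(arms)]
    return assignments
```

Dealing round-robin after shuffling keeps the arms balanced on the stratifying variables even when a stratum (such as women veterans, oversampled to 20%) is much smaller than the others.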
Mailing addresses were obtained from Department of Veterans Affairs (VA) records, from the Internal Revenue Service Taxpayer Address Retrieval System, and from a search of commercial credit bureau databases. Veterans were sent a packet which contained the 16-page paper questionnaire, an introductory letter signed by a senior VA official, an informed consent form which explained the purpose of the study and informed the veteran that his or her participation was voluntary and confidential, and a pre-addressed, postage-paid return envelope. The packet also included instructions for completing the questionnaire online, if preferred. The instructions contained a personalized web access code for security and privacy purposes. Mailings were conducted in three waves, with reminder/thank you postcards sent after each questionnaire mailing, following a modified Dillman method (Dillman, 2007). The second wave mailing took place two weeks after the first wave mailing. The third wave mailing took place four weeks after the second mailing in an effort to boost the response rate. Reminder/thank you postcards were sent one week after each mailing.
Returned questionnaires were classified as submitted, accepted, or completed. A submitted questionnaire was defined as any returned paper questionnaire or questionnaire data submitted online. An accepted questionnaire refers to all unique questionnaires submitted. If a respondent submitted multiple questionnaires, the more complete questionnaire was selected for inclusion. A completed questionnaire refers to an accepted questionnaire in which at least 80% of the questions were answered. A partially completed questionnaire was an accepted questionnaire with answers for at least 50% but fewer than 80% of the questions.
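The completeness thresholds above amount to a simple classification rule. The sketch below is an illustration of those cutoffs; the function name, and the treatment of questionnaires below 50% as "incomplete," are assumptions consistent with the text (incomplete questionnaires were counted as refusals).

```python
def classify_questionnaire(answered, total):
    """Classify an accepted questionnaire by completeness using the
    thresholds described in the text: at least 80% of questions
    answered -> 'completed'; at least 50% but fewer than 80% ->
    'partially completed'; below 50% -> 'incomplete' (treated as a
    refusal in the study's tallies)."""
    fraction = answered / total
    if fraction >= 0.80:
        return "completed"
    if fraction >= 0.50:
        return "partially completed"
    return "incomplete"
```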
Variables used in the analyses. Available demographic characteristics included gender, age group in years as of 2008 (24 or younger, 25-34, 35-44, 45-54, 55-64, or 65+), and race (white, black, Hispanic, other, unknown), while service-related variables included deployment to OEF/OIF, unit component (active duty, reserve, National Guard), and branch of service (Air Force, Army, Marines, Navy). Information was also available about mode of survey (mail, web), incentive status, and region of residence (midwest, northeast, south, west, and other/unknown). Other/unknown region was defined as Puerto Rico, Guam, and any missing state values.
Statistical analyses. The SAS statistical package (SAS Institute, 2004) was used for the analyses. The contact rate was calculated by dividing the number of respondents with completed or partially completed questionnaires, incomplete questionnaires, refusals, and deceased veterans by the whole sample (n=3,000). To calculate response rates, we excluded those who were never reached (n=561) from the denominator, because they were not exposed to the intervention (the monetary incentive). Response rates were then calculated as all respondents with completed or partially completed questionnaires divided by the remaining 2,439. Percentage response rates were examined by incentive group and also according to selected demographic and service-related variables. For the multivariate analyses, logistic regression was performed to examine incentive status while controlling for all other variables included in the model, among the reachable sample of veterans. Predicted marginals (adjusted percentages) were also estimated to allow for comparisons across categories of the incentive status variables included in the models (Korn and Graubard, 1999).
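The contact-rate and response-rate definitions can be checked against the disposition counts reported in the Results (640 complete and 2 partially complete questionnaires, 2 incomplete questionnaires counted as refusals, 137 other refusals, 2 deceased, and 561 never reached). This is a minimal arithmetic sketch, not the study's SAS code:

```python
# Disposition counts from the Results section.
sampled = 3000
never_reached = 561
responded = 640 + 2                  # completed or partially completed
contacted = responded + 2 + 137 + 2  # + incomplete, refusals, deceased

# Contact rate uses the whole sample as the denominator; the response
# rate excludes the 561 veterans who were never reached.
contact_rate = contacted / sampled                      # 783 / 3000
response_rate = responded / (sampled - never_reached)   # 642 / 2439
```

These reproduce the figures cited later: a contact rate of about 26.1% and a response rate of 26.3%.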
Among the 3,000 sampled veterans, we received 651 questionnaires that were submitted either by mail or on the web. Of the 651 submitted questionnaires, 7 were duplicates. One respondent submitted both a paper and a web questionnaire, and six respondents completed more than one mail questionnaire, bringing the number of accepted questionnaires to 644. About 77% (n=497) were mail questionnaires and 23% (n=147) were submitted via the web. Among the 644 accepted questionnaires, 640 were complete, 2 were partially complete, and 2 were incomplete and consequently classified as refusals. We also received 137 refusals and learned of 2 deaths, bringing the number of contacted veterans to 783.
Only about a quarter of the sampled veterans could be confirmed as contacted. A sizeable proportion of the veterans sampled for the pilot study likely received one or more of the mailings but never submitted a completed questionnaire. For about a third of non-respondents, every available address was marked return to sender by the U.S. Postal Service.
Almost 26% of respondents with accepted questionnaires were in the no-incentive group, 34% were in the promised-incentive group, and 39% were in the pre-paid-incentive group. Excluding those who were never reached (n=561), the response rate was 26.3% (642 questionnaires completed); the rates by incentive type were 21.1%, 27.9%, and 29.8% for the no-incentive, promised-incentive, and pre-paid-incentive groups, respectively (Table 1).
Characteristics of veterans who completed or partially completed a questionnaire according to incentive status, and as compared to the overall sample distribution (n = 3,000), are shown in Table 2. No important differences were observed by gender. Whites were over-represented in the promised and pre-paid incentive groups as compared to the overall sample, suggesting that the monetary incentive may have introduced a response bias by race. Veterans with less education were under-represented in each of the incentive groups as compared to the overall sample, and this was particularly true of the promised and pre-paid incentive groups. Irrespective of incentive status, younger veterans were under-represented among respondents as compared with the overall sample. Reservists and those who had been deployed to OEF/OIF had higher participation rates than active duty veterans and those who had not been deployed to OEF/OIF. In contrast to veterans who lived in other regions of the U.S., the incentive did not increase the response rate among those who resided in the South.
In the multivariate analyses, the 561 unreachable veterans whose mail had been returned to sender were excluded. In addition, 30 survey participants were omitted due to missing values for a response or explanatory variable, leaving a sample of 2,409 individuals available for the multivariate analyses. As shown in Table 3, promised and pre-paid incentive status were both positively associated with survey response. For individuals in the pre-paid incentive group, the odds of completing and returning the questionnaire were 52% higher than for those receiving no incentive, after controlling for other variables in the model [adjusted odds ratio (aOR) = 1.52, 95% confidence interval (CI) = 1.20 to 1.92]. If an individual was promised an incentive, the odds of completing and returning the questionnaire were 34% higher than for an individual receiving no incentive [aOR = 1.34, 95% CI = 1.05 to 1.71].
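As a point of comparison, the unadjusted odds ratios implied by the raw group response rates reported earlier (21.1%, 27.9%, and 29.8%) can be computed directly from the odds arithmetic. This is a sketch of that calculation, not the covariate-adjusted model behind Table 3, so the values differ somewhat from the reported aORs:

```python
def odds(p):
    """Convert a response proportion to odds."""
    return p / (1 - p)

# Unadjusted odds ratios from the raw group response rates; these
# ignore the covariates controlled for in the logistic regression.
or_prepaid = odds(0.298) / odds(0.211)    # pre-paid vs. no incentive
or_promised = odds(0.279) / odds(0.211)   # promised vs. no incentive
```

The unadjusted ratios come out near 1.59 and 1.45, in the same direction as (and slightly larger than) the adjusted estimates of 1.52 and 1.34.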
The results of this randomized trial indicate that a $5 pre-paid incentive was effective in increasing response rates among the 3,000 sampled veterans. If a veteran received a pre-paid incentive in the mail packet, the estimated odds of completing and returning the questionnaire increased by about 52% as compared to a veteran not receiving an incentive. If a veteran was promised an incentive following return of the survey, the estimated odds of completing and returning the questionnaire increased by 34% as compared to a veteran not receiving an incentive. Although the incentive did not increase the response rate among veterans who resided in the South, this may be due to chance or to regional differences in the percentage of urban residences.
Across all three incentive groups, participants were more likely to be older and to have a higher level of education as compared with the overall sample. A similar pattern was observed in an earlier survey of veterans who served in the first Gulf War (Kang et al., 2009). However, the use of the promised and pre-paid monetary incentives accentuated differences by education and race, potentially increasing the non-response bias.
In the published literature, there is conflicting evidence about the effects of incentives on data quality (Jackle and Lynn, 2008). Although some authors have expressed concern that the use of incentives could increase the motivation of less diligent respondents who might otherwise not respond, this is unlikely to have been the case in the present survey because of the modest size of the incentive and the small number of partial completes. Moreover, some prior studies among non-veteran populations found that incentives lead to improved respondent effort and less item non-response (Jackle and Lynn, 2008).
Monetary incentives are just one of several aspects of survey design that can be used to boost response rates. Offering respondents alternative ways to complete the survey (for example, the option of completing a self-administered mail questionnaire or web-based survey) is also likely to increase participation rates. Other considerations include the effect on the reliability of estimates obtained from the survey data and the potential for differential responses across subgroups of the population. It is important to note that the groups that were offered the pre-paid or promised incentives showed a potential bias toward more highly educated veterans and white veterans.
With respect to limitations, we did not examine whether cash incentives work better than non-cash incentives. A further limitation is that only $5 incentives were provided and we did not attempt to examine the effect of varying the monetary incentive amount. Studies conducted in non-veteran populations indicate that increasing the size of monetary incentives can lead to a point of diminishing return in terms of the beneficial effect on response rates (Armstrong, 1975).
The results from this study underscore the challenges of achieving a high response rate in health surveys of veterans who, like other Americans, face many demands on their time. This study demonstrates the value of offering a modest monetary incentive to increase responses.
Coughlin, Steven S., Pablo Aliaga, Shannon Barth, Stephanie Eber, Jessica Maillard, Clare M. Mahan, Han K. Kang, Aaron Schneiderman, Samar DeBakey, Patricia Vanderwolf, and Meredith Williams. 2011. “The Effectiveness of a Monetary Incentive on Response Rates in a Survey of Recent U.S. Veterans” Survey Practice, February: www.surveypractice.org
Armstrong JS. Monetary incentives in mail surveys. Public Opinion Quarterly 1975;39:111-6.
Dillman DA. Mail and Internet Surveys, 2nd edition. Hoboken, NJ: John Wiley & Sons, Inc., 2007.
Goyder J. An experiment with cash incentives on a personal interview survey. J Market Res Society 1994;36:360-6.
Groves RM, Cialdini RB, Couper MP. Understanding the decision to participate in a survey. Public Opinion Quarterly 1992;56:475-95.
Hopkins KD, Gullickson AR. Response rates in survey research: a meta-analysis of the effects of monetary gratuities. J Exp Educ 1992;61:52-62.
Jackle A, Lynn P. Respondent incentives in a multi-mode panel survey: cumulative effects on nonresponse and bias. Survey Methodology 2008;34:105-17.
Jobber D, Sauders J, Mitchell VW. Prepaid monetary incentive effects on mail survey response. J Business Research 2004;57:347-50.
Kang HK, Li B, Mahan CM, et al. Health of US Veterans of 1991 Gulf War: a follow-up survey in 10 years. JOEM 2009;51:401-10.
Korn EL, Graubard BL. Analysis of Health Surveys. Wiley Series in Probability and Statistics. New York: Wiley; 1999:126-39.
SAS Institute. SAS/STAT 9.1 user’s guide. Cary (NC): SAS Institute; 2004.
Thompson W. Utility of paying respondents: evidence from the Residential Energy Consumption Surveys. Paper presented at the annual conference of the American Association for Public Opinion Research, May 1985.
Trussell N, Lavrakas PJ. The influence of incremental increases in token cash incentives on mail survey response. Public Opinion Quarterly 2004;68:349-67.