Choosing the most appropriate data collection method is central to attaining a good response rate. The merits of each method must be considered within the context of the target population, the survey objectives, the type of information to be collected, the research budget and the time constraints.
Because this document focuses on the telephone as a survey method, the question here will be limited to the following: When is a telephone survey the most appropriate data collection method? Here are a few guidelines to consider.
While many factors will influence the choice of an appropriate data collection method, choosing the most suitable method will increase the likelihood of achieving a higher response rate.
After selecting the data collection method, consider strategies for contacting "hard-to-reach" respondents. Depending on the target audience and subject of the survey, some respondents may be much harder to contact than other segments of the population. These people include members of low-incidence populations–those defined by quite narrow demographic (or other) specifications. Instead of relying solely on the telephone, consider using a mixed-mode approach to contact or obtain data from hard-to-reach respondents. In fact, survey organizations are increasingly using mixed-mode survey designs to maximize response rates.
A mixed-mode approach increases the likelihood of contacting hard-to-reach respondents and can offer them response methods they might find more convenient than the telephone. Use of a mixed-mode approach assumes that alternate contact information is available for the target segment of the population. A mixed-mode approach may increase the cost of data collection and the length of the data collection period. However, it can also shorten the time required to conduct the fieldwork and can reduce the costs of achieving the target number of completes (such as the costs needed to make numerous callbacks or refusal conversions to complete interviews with hard-to-reach respondents). Impact on cost and timing aside, a mixed-mode approach does tend to yield higher response rates for studies.
The impact on survey accuracy of using a mixed-mode approach must also be weighed against the potential bias of not hearing from these respondents. For instance, the use of different data collection methods can result in data that are not entirely comparable, depending on the types of questions asked.
Consider a question with a long list of responses. In a telephone survey, the interviewer reads the list to respondents and can rotate the possible answers to account for primacy/recency effects–that is, the tendency of respondents to pick the first or last response presented. However, it is not as easy to vary the order of answers when using a paper-based, self-administered questionnaire. Multiple versions of the questionnaire with randomized ordering are needed.
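The rotation a telephone interviewer (or CATI system) performs can be approximated on paper only by printing several questionnaire versions. As a minimal sketch, with invented response options, randomized versions could be generated like this:

```python
import random

# Hypothetical response options for a single closed-ended question.
options = [
    "Newspapers",
    "Television",
    "Radio",
    "Online news sites",
    "Word of mouth",
]

def randomized_versions(options, n_versions, seed=0):
    """Generate questionnaire versions, each presenting the options in a
    different random order, to spread out primacy/recency effects."""
    rng = random.Random(seed)
    return [rng.sample(options, k=len(options)) for _ in range(n_versions)]

versions = randomized_versions(options, n_versions=4)
for i, order in enumerate(versions, start=1):
    print(f"Version {i}: {order}")
```

Each version contains the same options, so responses remain comparable across versions once the data are pooled.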
As another example, in a telephone survey, interviewers may ask respondents an open-ended question and use a pre-coded list of answers (which are not read to the respondent) and the "other/specify" option to record responses. This approach facilitates coding and data comparability. However, this type of question does not work at all in online surveys. Replacing it with a truly open-ended question is not a good option because of the high non-response rate for open-ended questions in online surveys.
In short, a mixed-mode approach may introduce a new variable that must be considered during the analysis: whether people responded differently to self-administered questions than to interviewer-administered ones. Consider a mixed-mode approach when the potential for non-response error outweighs concerns related to measurement error.
Data are collected from one person who acts as a proxy for another individual or the entire household.
There is a general consensus in the research literature that proxy respondents should not be used when the research is designed to measure attitudes, opinions or knowledge. Current evidence suggests that data from proxy respondents sometimes differ systematically from data obtained directly from the intended respondents themselves (Groves et al., 2004). Nevertheless, under the right circumstances, proxy respondents can increase response rates by enabling survey organizations to reach respondents who otherwise would not be able to take part in the survey. For some studies, using proxy respondents is better than obtaining no response at all.
Develop a clear set of criteria to determine which sorts of studies are suitable for the use of proxy respondents. Proxy respondents can be viable for surveys that collect factual or experience-based information. If the information being collected is not opinion-based, it is reasonable to assume that people other than the intended respondent could answer, as long as they possess the needed information. Proxy respondents may also be useful when the intended respondent speaks neither official language, or has a disability that makes a telephone interview difficult, such as a hearing impairment. Interviewers should clearly identify proxy interviews in the data set so that tests can be run during the analysis to look for variations between proxy and non-proxy interviews.
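The flag-and-test step can be sketched as follows; the records, the proxy flag and the factual measure (`visits_last_year`) are hypothetical:

```python
from statistics import mean

# Hypothetical completed-interview records; "proxy" flags interviews
# answered on behalf of the intended respondent.
interviews = [
    {"id": 1, "proxy": False, "visits_last_year": 3},
    {"id": 2, "proxy": True,  "visits_last_year": 1},
    {"id": 3, "proxy": False, "visits_last_year": 4},
    {"id": 4, "proxy": True,  "visits_last_year": 2},
    {"id": 5, "proxy": False, "visits_last_year": 2},
]

def mean_by_proxy_status(records, field):
    """Compare the mean of a factual measure between proxy and non-proxy
    interviews, as a first check for systematic differences."""
    proxy = [r[field] for r in records if r["proxy"]]
    non_proxy = [r[field] for r in records if not r["proxy"]]
    return mean(proxy), mean(non_proxy)

proxy_mean, non_proxy_mean = mean_by_proxy_status(interviews, "visits_last_year")
print(f"Proxy mean: {proxy_mean}, non-proxy mean: {non_proxy_mean}")
```

A large gap between the two groups on measures like this would warrant a closer look (and, with real data, a formal significance test) before pooling proxy and non-proxy interviews.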
Surveying the general public in July and August, when Canadians typically take vacations, will generally result in lower response rates due to these absences. Likewise, avoid surveying accountants during tax season, or public servants during the March 31 fiscal year-end period.
Ideally, data should be collected at the most appropriate time of year to achieve the highest response rate possible. Avoid surveying the target population during times of the year when members are hard to reach or less willing to participate in research. Such times will depend on the specific audience, but try to avoid interviewing during major holidays, audience-specific events, three-day weekends and vacation seasons. Timing matters because telephone numbers are retired after a specific number of callbacks–attempts to re-contact people who were not available when first called–and new telephone numbers are then attempted to achieve the required number of completed surveys. Retiring valid numbers–for example, those where the interviewer got a busy signal, no answer or an answering machine–and adding new contacts will decrease the response rate.
If one cannot avoid collecting data during these times, build a longer field period into the project timelines. Unless a longer interviewing window is scheduled, the response rate is likely to be lower and the sample of respondents might be biased (if survey respondents differ systematically from non-respondents). To make sure the sample is representative of the target population, the interviewing will invariably take longer to complete. That is the trade-off for conducting POR telephone surveys at less appropriate times of the year.
The length of the data collection period can have a direct impact on response rates. It will depend on the sample size, interview length and interviewing supplier capacity. Such factors aside, the field period should be sufficient to achieve a good response rate. A general rule is that the longer a study remains in field, the higher the response rate (although there is a point when the return on invested time and budget will diminish).5 Telephone surveys with short data collection periods tend to suffer from lower response rates because the telephone numbers may not receive as many callbacks before being retired, or the callbacks are not as varied in terms of time of day or day of the week. As well, a person refusing one day may be in a different situation or frame of mind a few weeks later, and more amenable to being interviewed.
The length of the interviewing period is an essential factor in maximizing response rates to telephone surveys (Halpenny and Ambrose, 2006).6 A longer field time increases the chances of reaching a respondent and improves the chances of finding that respondent in a situation conducive to taking part in the survey.7 The following table provides an approximate indication of the range of response rates that can be expected from a general public RDD telephone survey, depending on the length of the field period.
| Response rate | Field time* |
| --- | --- |
| 7% to 15% | 2 to 6 days |
| 20% to 35% | 1 to 4 weeks |
| 35% to 60% | 6 to 12+ weeks |
* Times assume sufficient field resources are available, such as budget, computer-assisted telephone interviewing stations and interviewers.
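For reference, response rates like those in the table above are typically computed as completed interviews divided by all potentially eligible cases attempted. The sketch below uses one common simplified formulation with invented figures; the exact formula required for a given study (for example, the MRIA empirical calculation) may differ:

```python
def simple_response_rate(completes, refusals, non_contacts, unknown_eligibility):
    """One common simplified response-rate formulation: completed interviews
    divided by all potentially eligible numbers dialled. The formula actually
    required for a given study may differ."""
    attempted = completes + refusals + non_contacts + unknown_eligibility
    return completes / attempted

# Invented dispositions for a hypothetical RDD telephone survey.
rate = simple_response_rate(
    completes=500, refusals=900, non_contacts=2100, unknown_eligibility=1500
)
print(f"Response rate: {rate:.1%}")  # 10.0%
```

Under this formulation, a longer field period raises the rate mainly by converting non-contacts and unknown-eligibility cases into completes before numbers are retired.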
The time allotted for data collection should also reflect incidence level, target audience and research objectives. All things being equal, a survey of a low-incidence population or one of hard-to-reach elected officials will require more time to complete than a survey of the general population.
In addition to these considerations, the type of information being collected can influence the field time required. Should it be necessary to capture a reflection of the target population's attitudes or behaviours at a specific moment in time, a longer field period might compromise these objectives. An example of this type of study is a "recall" survey following an event such as an advertising campaign. If the organization does not take measures to mitigate the effects of the time lapse, prolonged data collection may not yield accurate data. As time passes, the likelihood of respondents recalling the advertisement decreases. Other types of POR studies where this might apply include mailout recalls, assessments of recent service interactions and time-use studies (such as diary studies where respondents must record an activity or behaviour at a specific point in time).
The response rate is one indicator of survey quality. Sampling and non-sampling errors can also affect the quality of a survey. No research design is perfect, but efforts should be made to minimize sources of error, independent of the response rate.
Topic interest plays a role in achieving high response rates. Generally, the more interesting the topic, the more likely people will be to respond (Groves, 2004).
In survey research, the population or universe refers to the target audience or the group of people of interest–for instance, the general public, private sector executives or seniors. The population to be included in the survey must be relevant to the research objectives. Properly defining the population will determine who should be included in the sample and who should not. This step is essential to conducting good quality research and has an indirect impact on response rates. The more important that potential respondents perceive the research to be, and the more relevant it is to them, the more likely they are to respond and take part in the survey. When the target population has no direct link to the survey topic, an effective introduction is critical. Consider how best to frame the research as relevant to these potential respondents (see BP 1.3.2).
Select a sample size that relates to the target population, the research budget, the intended data analyses and the required degree of accuracy. Sample size does not have an impact on response rates. Rather, it affects the accuracy of the results. The larger the sample size, the smaller the margin of error and the more reliable the results. Choosing the right sample size will help minimize unnecessary sampling error. It will not help to increase the response rate per se.
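The relationship between sample size and margin of error can be illustrated with the standard formula for a simple random sample at the 95% confidence level, assuming the worst case of p = 0.5; a minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample of size n,
    at the 95% confidence level (z = 1.96), assuming the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000, 2000):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

Note the diminishing returns: doubling the sample from 1,000 to 2,000 narrows the margin far less than the first 1,000 interviews did, which is why sample size is a budget and accuracy decision rather than a response-rate strategy.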
To determine the appropriate sample, consider the following factors.
The sample frame is like a map that determines who is eligible to participate in the survey–for example, members of the general public, Ontario teachers or users of a certain government program. It is important to put in place a sample frame that effectively corresponds to the population of interest. After doing so, develop a sample list that includes all elements of the research population and constitutes the source from which survey respondents will be drawn. Coverage error occurs when this list does not include all segments of the target population. Consider a telephone survey of the general public. RDD samples generally include only landlines, not cell telephone lines. In Canada, approximately 94% of households have landlines, 4.8% have cell phones only, and 1.2% do not have any telephone. As a result, an RDD survey of Canadians will have minimal, but still some, coverage error. Minimizing coverage error will increase the likelihood that the information collected accurately reflects the target population.
A high response rate to a survey based on a flawed or incomplete sample frame may not produce valid data. Consider a telephone survey of the general population that results in a high response rate but uses local telephone directories as its sample frame. Given that approximately 10 to 20% of the population has an unlisted or newly listed telephone number, not everyone has an equal chance of being contacted for the survey. As a result, the survey data may not reflect the opinions or attitudes of the segment of the population with unlisted telephone numbers. If this segment of the population differs demographically or attitudinally from people with listed telephone numbers, the survey findings may not be valid.
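The arithmetic behind this kind of coverage bias can be sketched with invented figures:

```python
# Hypothetical illustration of coverage bias from a directory-based frame.
# Suppose 15% of the population is unlisted and holds different views.
listed_support = 0.60    # support for a policy among listed households
unlisted_support = 0.40  # support among unlisted households (not covered)
unlisted_share = 0.15

true_support = (1 - unlisted_share) * listed_support + unlisted_share * unlisted_support
survey_estimate = listed_support  # a directory-based survey reaches only listed households
bias = survey_estimate - true_support

print(f"True support: {true_support:.1%}")        # 57.0%
print(f"Survey estimate: {survey_estimate:.1%}")  # 60.0%
print(f"Coverage bias: {bias:+.1%}")              # +3.0%
```

No response rate, however high, corrects for this bias, because the unlisted segment never had a chance of selection in the first place.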
Common sampling methods for telephone surveys include RDD, sample lists purchased from list brokers and in-house lists (such as lists of clients, members or employees). Regardless of the sampling method used, survey organizations should consider the following.
The increased use of cell telephones among some segments of the Canadian population presents a growing problem. As more households rely only on cell telephones, telephone coverage error may increase. In December 2005, Statistics Canada reported that 4.8% of Canadian households had only a cell telephone (compared with 1.9% in 1993). That number rises to 7.1% in B.C. and to 7.7% among low-income households (Statistics Canada, 2005).
Research undertaken in the United States has found that cell-only Americans differ from those with a landline (Purcell, 2006; Pew, 2006; Tuckel et al., 2006). Cell-only users tend to be younger (18 to 29 years) and are more likely to be single, lower income and renters (rather than homeowners). Currently, evidence suggests that the cell-only phenomenon has not undermined national polls (Pew, 2006). Nevertheless, as the proportion of cell-only households increases, it may become prudent to augment RDD samples with cell samples to provide a more representative final sample (Purcell et al., 2006)–for example, one that includes young people, who tend to be underrepresented in RDD surveys.
Research suppliers typically handle sampling issues. Research clients might consider asking their suppliers the following questions.
The longer the survey, the less likely people are to take part or to complete the full interview.
Response burden is an unavoidable part of survey research, but efforts to limit it can help maximize response rates. Shorter questionnaires can improve response rates, particularly if interviewers inform respondents that the interview will be short.8 In practical terms, surveys of 10 minutes or less are considered relatively short and not overly burdensome. Surveys of 15 minutes are common in federal government POR and do not tend to place an undue burden on respondents. Telephone surveys of 20 minutes or more are less common and can be expected to result in lower response rates, other factors such as survey topic and target audience being equal. Unless necessary, avoid interviews longer than 15 minutes to help to maximize response rates. Keeping the length of the questionnaire to a minimum, while still achieving the research objectives, will help yield higher response rates for studies.
Before designing a questionnaire, it is a good idea to review what is already known about the target population in relation to the study objectives. Assess current information needs, determine whether some of this information is available elsewhere, and prioritize issues and questions to make it easier to manage questionnaire length. This review will help to ensure that departments and agencies collect essential information only. The result will be a focused questionnaire. Not only will a shorter questionnaire increase the response rates of individual studies; limiting response burden will also help to cultivate more favourable perceptions of survey research generally and may increase the likelihood of Canadians agreeing to an interview when contacted for future surveys.
A well-structured questionnaire ensures that the data collected satisfy the objectives of the research and minimizes the burden placed on respondents. A good questionnaire collects only the information that is essential to the survey objectives. Consider the following guidelines when developing the questionnaire for a study.
Closely review the translation of the questionnaire. The language must be as clear and simple as that in the original document. Pay particular attention to the accuracy and appropriateness of the translation. The "correct" translation of a text might not always reflect the popular vocabulary of the target audience. Efforts to produce a well-designed questionnaire in one language will be undermined if the translation is not subject to the same level of scrutiny.
A well-designed survey will reduce the response burden, which can improve the response rate.
Pre-testing the questionnaire is an excellent way to work out any potential problems with the research instrument before the fieldwork.9 A pre-test will help determine the length of the survey and ensure that the questionnaire is measuring what it is designed to measure, that respondents understand the questions and can provide the information requested, and that interviewers understand the questionnaire and the computer-assisted telephone interviewing (CATI) programming. In short, it is an important step in the development of the research instrument.
Consider the following when pre-testing the questionnaire for a study.
As an additional quality control measure following the pre-test, but before the survey gets well into field, it can be helpful to have the top-line frequencies run after 50 to 100 completed surveys. Reviewing the frequencies can help determine whether people are being routed through the questions they should be asked. This approach not only helps ensure that respondents are asked the questions they should be asked, thus minimizing potential frustration; it is also an excellent check on data quality, conducted at a time when adjustments to the questionnaire are still possible.
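This routing check can be sketched as follows, assuming hypothetical question names (Q1, Q2) and skip logic in which Q2 should be asked only of respondents answering "Yes" at Q1:

```python
from collections import Counter

# Hypothetical records from the first 50-100 completed interviews.
completes = [
    {"Q1": "Yes", "Q2": "Satisfied"},
    {"Q1": "No",  "Q2": None},
    {"Q1": "Yes", "Q2": "Dissatisfied"},
    {"Q1": "No",  "Q2": None},
]

def topline_frequencies(records, question):
    """Count the distribution of answers to one question."""
    return Counter(r[question] for r in records)

def routing_errors(records):
    """Flag records where Q2 was answered despite a 'No' at Q1, or skipped
    despite a 'Yes' -- possible signs of a CATI skip-logic bug."""
    return [
        r for r in records
        if (r["Q1"] == "No" and r["Q2"] is not None)
        or (r["Q1"] == "Yes" and r["Q2"] is None)
    ]

print(topline_frequencies(completes, "Q1"))  # e.g. Counter({'Yes': 2, 'No': 2})
print(routing_errors(completes))             # an empty list means no errors found
```

Any record returned by the error check points to a routing problem worth fixing while questionnaire adjustments are still possible.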
Reaching a potential respondent is just the first step in the interview process. Once a potential respondent has answered the telephone, he or she needs to agree to take part in the survey. Incorporating strategies designed to encourage participation is critical to achieving high response rates.
Advance notification is more common and practical in special-audience research than in telephone surveys conducted with the general public.
Using an advance information letter can help enlist survey participation and improve response rates (de Leeuw et al., 2006;11 Link and Mokdad, 2005). An advance letter explains the background of the study, encourages participation and legitimizes the research. It can be used to position the research as a consultative exercise, particularly among special audiences such as stakeholder groups; respondents may be more interested in participating in a consultation than in simply doing a survey. Likewise, giving clients advance notice of a survey shows respect for clients, which can improve the response rate.
List-based sample frames–such as client, employee or other types of stakeholder lists–provide the best opportunity to send advance notification. Advance notification is less feasible in RDD and most other telephone surveys of the general public, when an accurate list of respondents is not available to the researcher. In some instances, only a subset of potential respondents for whom mailing addresses are available will receive a letter before the telephone call. While the overall response rate may benefit, the subset who received the advance letter may be over-represented in the final sample, which may introduce bias into the survey data.
If the research design incorporates advance notification of the target population, the federal department or agency should send the letter on official stationery, where possible, rather than having the research firm send it on their letterhead. This approach can increase the credibility and perceived importance of the research. A good letter should be kept short–no longer than one page as a general rule–and include the following elements.
When it is not possible to use advance letters, consider the following strategies on a case-by-case basis.
Survey research needs to distinguish itself from telephone marketing and solicitation calls. Professional survey introductions can help accomplish this goal.
Effective introductions are necessary to increase the likelihood that the person will take part in the research. Since most telephone refusals occur before the interviewer has an opportunity to request an interview, an effective introduction should be short and appeal directly to people. Studies have found that the majority of refusals occur during the first minute of the call (Groves, 1990). While survey introductions need to convey a number of points, they should try to do so in the most efficient manner. In the introduction, the interviewer should do the following:
Informing potential respondents of the topic of the survey may increase response rates in some instances–for example, when it is an interesting topic or one especially relevant to the respondent. People cooperate at higher rates on surveys of interest to them, and this tendency is most evident when the topic is stated in the introduction. However, revealing the topic may compromise the research objectives. For example, in policy studies, an organization may want to hear from all aspects of the general public, not only those most interested in the specific policy area. Identifying the topic can sometimes lead to unwanted self-selection, where certain types of respondents opt into a survey and others opt out. If unsure about the impact on the research objectives of identifying the topic, use topic-neutral language in the survey introduction. For example, the interviewer could say he or she is "calling to discuss current issues of interest to Canadians."
Your participation in the survey is completely voluntary and will not affect any dealings you may have with the Government of Canada. Your privacy is protected by law.
Assurances of confidentiality may allay concerns that potential respondents might have about survey participation. All surveys conducted by the federal government must contain privacy language, although the specific language included in survey introductions varies from department to department. While the specifics vary, privacy and confidentiality language should be appropriate to the survey, its objectives and its target population.
In addition, departments and agencies should not request personal information from respondents that is not relevant or essential to the survey. If this information is essential to the survey analysis, interviewers should, when necessary, explain to respondents why the information is important and how the data will be used. Personal information might include any demographic information not absolutely required for analytical purposes, such as details about racial background, religion or sexual orientation that the respondent might view as sensitive.
Incentives are logistically difficult to use in telephone surveys.
There is a general consensus among survey researchers that monetary and non-monetary incentives are an effective way to increase the response rate of a study (Fahimi et al., 2006). Incentives are particularly useful in surveys where the response burden is high–that is, where the respondent has to make an exceptional effort. Where possible, offer the incentive to respondents when first contacting them to take part in survey research (Church, 1993). Compared to no incentives at all, incentives provided after the survey is completed do not significantly improve response rates (Singer et al., 2000).
Incentives have some drawbacks. As well as increasing costs, they may increase the public's expectation of payment, induce perceptions of inequity (if, for example, they are used only to convert refusals), and affect sample composition or responses to specific questions, with those receiving incentives potentially answering more positively. These concerns, especially those related to optics, are only amplified in the context of conducting research for the Government of Canada.
Despite these weaknesses, incentives may be appropriate for some Government of Canada telephone surveys. Incentives may be useful if a survey has one or more of the following aspects.
Distributing a research summary to special-audience respondents is a valuable and relatively common type of non-monetary incentive. Individuals taking part in such research tend to be stakeholders or other professionals who can benefit from, and attribute value to, the findings. Stakeholders are often interested in the outcome of the study to which they contributed, while other professionals see value in the competitive intelligence afforded them by a summary of the findings. Use of this form of incentive is appropriate and effective within the federal government context to increase response rates.
Other common incentives include monetary awards, gift certificates and entries in prize draws. Some literature suggests that the amount of a monetary incentive is less important to respondents than the fact that they receive an incentive–in other words, the incentive need only be symbolic.
Identifying the sponsor of a survey can increase favourable opinion of the sponsor. Attitudinal results from sponsor-identified surveys should not be compared to surveys where the sponsor is not identified, such as general omnibus surveys.
Revealing the sponsor of a survey can increase response rates, depending on the legitimacy and public perceptions of the organization.13 Research suggests that government-sponsored or -conducted surveys achieve higher response rates than those of most other organizations (Heberlein and Baumgartner, 1978; Groves and Couper, 1998).14 As such, identifying the Government of Canada, or a department or agency, as the sponsor of the survey can increase the response rate. Except for surveys such as awareness studies, when disclosing the sponsor would compromise the survey objectives, the practice should be to reveal sponsorship. The Government of Canada should be emphasized as the study sponsor; the name of the contractor conducting the research on behalf of the government should only be provided after the government is identified. If the department or agency is not well known, the survey introduction should identify the Government of Canada as the study sponsor, either with or without the department or agency name. For example, the interviewer could say, "XYZ Canada, an agency of the Government of Canada, is sponsoring this study."
Government of Canada telephone surveys should offer potential respondents the name and telephone number of a validation source for the study, if they ask for it. This source should be a contact at the sponsoring department or agency–typically, the POR buyer or end client who commissioned the research. The level of the individual is far less important than his or her knowledge of the POR study, including why it is being conducted, how the research firm obtained individuals' contact information and how the government will use the data collected through the survey. Either one bilingual contact person or one person fluent in each official language is required. In addition, all surveys should be registered with the Marketing Research and Intelligence Association's (MRIA's) Survey Registration System, so that potential respondents can call a toll-free MRIA number to determine that the survey is legitimate.
Another effective validation approach in some instances is to refer potential respondents to a relevant toll-free number in the government blue pages of their telephone directory, such as the telephone number for the Employment Insurance Program or the Canada Pension Plan. This approach is particularly useful when surveying seniors, because they are often the target of telephone scams and can be more cautious in dealing with unsolicited telephone calls. This approach may not be practical for all POR telephone surveys; however, it is worth considering, depending on the scope of the research and the target audience. A related approach is to offer potential respondents the telephone number of the media relations office of the sponsoring department or agency.
Related to the previous point, call centres and other relevant offices of the sponsoring department or agency should be informed of the survey in advance. Even if interviewers do not directly refer potential respondents to the call centre or other office to validate the survey, people may call anyway, asking about the research. This point is particularly relevant to client and stakeholder surveys. POR officials should notify relevant officials about the survey and provide them with information they can use to respond to enquiries. Consider developing a brief Q&A document for the media relations officer in the communications area of the department or agency undertaking the study.
2 For a good discussion of online sampling concerns, see Guha (2006).
3 RDD can be used to help overcome the lack of complete telephone listings, but no equivalent to RDD is available for online surveys.
4 Tavassoli and Fitzsimons (2006) found that people respond differently to the same question when typing an answer rather than saying it. Response modes that require written, not spoken, answers (such as online surveys) change the representation of attitudes and behaviours. The implication drawn from this study is that online surveys may not be useful in discerning changes in attitudes over time.
5 Using data on response rates for 205 telephone surveys, McCarty et al. (2006) found that even a one-day increase in the length of the field period (per 100 cases) resulted in a 7% increase in the response rate.
6 Findings of studies undertaken by Keeter et al. (2000) and Halpenny and Ambrose (2006) found that response rates for identical surveys improved substantially the longer the surveys remained in field.
7 Gallagher et al. (2006) found that maintaining consistently high response rates over time in parallel RDD surveys required an increasing number of field hours and call attempts per completed interview.
8 See, for example, McCarty et al. (2006) or Dillman et al. (1993). While the literature examining the impact of survey length on response rates is not conclusive (see Bogen, 1996), logic and practical experience suggest that longer questionnaires will result in lower response rates.
9 The importance of conducting a pre-test is reflected in the Office of Management and Budget standards, which make pre-tests mandatory (unless the instrument has previously been used successfully in the field, such as in a tracking survey).
10 If the research team is monitoring the pre-test interviews live at the field house, it is generally not possible for the team to hear all of these interviews, since they usually run concurrently.
11 This recent meta-analysis of the impact of advance letters on response rates for telephone surveys concluded that pre-notification is an effective strategy. Average response rates went from 58% (no letter) to 66% (advance letter).
12 See ZuWallack (2006) for an RDD household respondent selection method designed to increase response rates and reduce survey costs.
13 Beebe (2006) found that familiarity with a survey sponsor increases the likelihood of participation.
14 Harris-Kojetin and Tucker (1999) found that during times when public opinion of the government was favourable, cooperation rates on a major government survey were higher.