Standards and Guidelines for: Response Rate

Overview

In the Panel's discussion of response rate, several broad themes emerged concerning its role and significance in telephone surveys.

  • Response rate is one of a number of factors that potentially relate to telephone survey data quality. In the design and evaluation of telephone surveys, it is important to consider not only response rate, but also each of the other factors related to data quality, such as questionnaire design, sample coverage, data collection quality controls and so forth.
  • Response rate is important as an indicator of potential risk to data quality in the form of non-response bias. If survey non-respondents differ systematically from respondents on key survey variables, then non-response bias exists. The magnitude of any non-response bias in the results will depend on both the size of the difference between non-respondents and respondents on key survey variables and on the response rate (a standard expression of this relationship appears after this list).
  • The Panel emphasized, however, that response rate should not be interpreted as a direct measure of data quality, but rather only as an indicator of potential risk to data quality. If nonrespondents and respondents do not in fact differ on key survey variables, then the results will not be subject to non-response bias, regardless of response rate level. Research to date has not shown a clear relationship between response rate level and data validity in public opinion telephone survey research. Based on a review of research, the Best Practices: Improving Respondent Cooperation for Telephone Surveys report prepared for PORD noted, "These studies suggest that higher response rates do not necessarily produce more accurate data, and that surveys with low response rates can still provide useful and valid data."
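
As a point of reference, the dependence of bias on both the respondent/non-respondent difference and the response rate can be made explicit. The following is the standard deterministic expression for the bias of a respondent mean; it is a textbook result in survey methodology, stated here for illustration rather than drawn from the Panel's discussion:

    Bias(ȳ_r) = (m / n) × (Ȳ_r − Ȳ_m)

where n is the total sample size, m is the number of non-respondents, and Ȳ_r and Ȳ_m are the respondent and non-respondent means on the variable of interest. For example, with a 40% response rate (so m/n = 0.6) and a 5-point difference between respondents and non-respondents on a key variable, the bias in the respondent mean would be 0.6 × 5 = 3 points; if respondents and non-respondents do not differ (Ȳ_r = Ȳ_m), the bias is zero regardless of response rate.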

Consensus was reached by the Panel on standards and guidelines on the following aspects of response rate:

  • Reporting of response rate: How response rate is to be calculated and reported
  • Non-response bias analyses: When non-response bias analyses should be conducted
  • Response rate targets: The role of response rate level in planning telephone surveys, both in general and in the context of specific projects
  • Monitoring response rate during fieldwork
  • Considerations related to whether or not to attempt refusal conversions

Note that the Panel made recommendations with respect to both unit and item response rates. The recommendation pertaining to item response appears in the section on non-response bias analyses. All other recommendations pertain to unit response.

Reporting of Response Rate

STANDARD

There was a consensus among the Panel in support of the following standard:

Response rate must be calculated using the MRIA Data Collection Response Rate Calculation Empirical Method, and response rate together with the associated record of call disposition must be included in the survey final report.

GUIDELINE

The MRIA has also adopted a modified version of the response rate calculation, the Estimation Method, which is recommended by Statistics Canada as a secondary method. The MRIA recommends that when the Estimation Method is reported, it be provided in addition to the Empirical Method response rate calculation.

The Panel agreed with the MRIA position, and noted that the Estimation Method can be useful in some circumstances. It was suggested that the decision whether to report the Estimation Method be made on a project-by-project basis. Accordingly, the following guideline was recommended:

Where the project authority and research firm judge it to be useful, the MRIA Estimation Method response rate can also be reported in addition to the Empirical Method response rate.
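
To make the two calculations concrete, the following is a minimal sketch of the arithmetic involved. The three-way grouping of dispositions (unresolved U, in-scope non-responding IS, responding R) follows the general structure of the Empirical Method; the specific disposition labels, the counts, and the eligibility estimate e used for the Estimation Method are illustrative assumptions, not the official MRIA definitions.

    # Illustrative response rate arithmetic from a record of call dispositions.
    # Category groupings and labels are assumptions for this sketch.
    dispositions = {
        "busy": 120, "no_answer": 430, "answering_machine": 250,   # unresolved (U)
        "not_in_service": 300, "fax_modem": 80, "business": 40,    # out of scope (OS)
        "refusal": 520, "language_barrier": 60, "break_off": 45,   # in-scope non-responding (IS)
        "completed": 400, "disqualified": 90, "quota_filled": 25,  # responding units (R)
    }

    U  = sum(dispositions[k] for k in ("busy", "no_answer", "answering_machine"))
    OS = sum(dispositions[k] for k in ("not_in_service", "fax_modem", "business"))
    IS = sum(dispositions[k] for k in ("refusal", "language_barrier", "break_off"))
    R  = sum(dispositions[k] for k in ("completed", "disqualified", "quota_filled"))

    # Empirical Method: every unresolved number is treated as in-scope.
    empirical = R / (U + IS + R)

    # Estimation Method (sketch): count only the share of unresolved numbers
    # estimated to be in-scope; one common choice, assumed here, is the
    # in-scope rate observed among resolved numbers.
    e = (IS + R) / (IS + R + OS)
    estimated = R / (e * U + IS + R)

    print(f"Empirical Method:  {empirical:.1%}")   # lower, more conservative
    print(f"Estimation Method: {estimated:.1%}")   # higher, adjusts for eligibility

Because the Estimation Method removes a portion of the unresolved numbers from the denominator, it will generally report a higher rate than the Empirical Method for the same fieldwork, which is consistent with the MRIA position that it be reported only in addition to, not in place of, the Empirical Method.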

Non-response Bias Analyses

It was noted in the earlier Overview section that while response rate is not a direct indicator of data quality, it is important as an indicator of potential risk to data quality in the form of non-response bias. In this context, the following summarizes the Panel's recommendations with respect to non-response bias analysis.

  • Every telephone survey should include an analysis of the potential for non-response bias based on information collected during the normal conduct of the survey.
  • Several types of non-response analyses that could be conducted as part of a survey were identified, and it was suggested that the final determination of the analyses to be done be tailored to the characteristics of each particular survey.
  • When the non-response analyses suggest there would be value in getting further information about the potential for non-response bias, the project authority could consider contracting additional research such as a follow-up survey of nonresponders or some other special data collection or analyses.

The following is a fuller statement of the standards and guidelines suggested by the Panel related to analysis of the potential for non-response bias.

STANDARD

  • All survey reports must contain a discussion of the potential for non-response bias for the survey as a whole and for key survey variables. Where the potential for non-response bias exists, efforts should be made to quantify the bias, if possible, within the existing project budget. If this is not possible, the likelihood and nature of any potential non-response bias must be discussed.

GUIDELINES

  1. The non-response analyses conducted as part of routine survey analysis would be limited to using data collected as part of the normal conduct of the survey, and could include techniques such as comparison of the sample composition to the sample frame, comparison to external sources, comparison of "early" versus "late" responders (illustrated in the sketch following these guidelines), or observations made during data collection on the characteristics of nonresponders. The non-response analyses conducted for a particular survey would be tailored to the characteristics of that survey.
  2. Consider contracting additional data collection or analyses when the non-response analyses conducted as part of routine survey analysis suggest there may be value in getting additional information.
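
The "early" versus "late" responder comparison mentioned in Guideline #1 can be sketched briefly. This is a minimal illustration, assuming completed interviews are available in a pandas DataFrame with a call_attempts column recording how many attempts each interview required; the column names, the three-attempt cutoff, and the use of a t-test are assumptions for illustration, not a prescribed method.

    # Compare "early" responders (few call attempts) with "late" responders
    # (many attempts), treating the latter as a rough proxy for hard-to-reach
    # non-respondents. A large, reliable gap on a key variable flags a
    # potential non-response bias.
    import pandas as pd
    from scipy import stats

    def early_vs_late(df: pd.DataFrame, key_var: str, cutoff: int = 3) -> dict:
        early = df.loc[df["call_attempts"] <= cutoff, key_var]
        late = df.loc[df["call_attempts"] > cutoff, key_var]
        t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)
        return {
            "early_mean": early.mean(),
            "late_mean": late.mean(),
            "p_value": p_value,
        }

The design rationale is that respondents who were hardest to reach are assumed to resemble those who were never reached at all; if the two groups differ markedly on a key variable, the survey estimate may be biased by non-response.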

The following are additional notes on the Panel's discussions with respect to the above standards and guidelines.

STANDARD #1

As part of its discussion of non-response analysis, the Panel considered the approach taken by the U.S. Office of Management and Budget (OMB), which essentially recommended that non-response bias analyses be conducted only if circumstances indicate a potential for bias to occur. For example, the OMB offers the following guideline: Plan for a non-response bias analysis if the expected unit response rate is below 80%. The Panel preferred that a non-response bias analysis always be conducted, for a few reasons.

  • In practical terms, GOC telephone surveys very rarely have a response rate of 80% or higher, so the OMB guideline means that a non-response bias analysis would be done for virtually all surveys.
  • More fundamentally, it was felt there should always be some assessment of whether non-response bias might have affected survey results, regardless of response rate. Response rate is only an indicator of potential risk of non-response bias. Even high response rates can be associated with non-response bias, so it was judged prudent to always conduct a non-response bias analysis rather than making the analysis contingent on an imperfect risk indicator such as response rate.

As outlined in the next section on response rate targets, the Panel felt the appropriate point at which to consider response rate in relationship to the potential for non-response bias is at the survey planning stage, where consideration of response rate can be an effective part of managing the risk of non-response bias.

The suggested standard encompasses non-response bias analyses at both the unit and the item levels, with the item analyses focused on "key survey variables." The Panel considered the OMB guideline for non-response bias analysis at the item level: Plan for a non-response bias analysis if the expected item response rate is below 70% for any items used in the report. The OMB item response rate criterion was considered too lenient, because it is relatively uncommon for survey variables in GOC telephone surveys to have response rates below 70%. The Panel felt it prudent to always conduct a non-response bias analysis for key survey variables, because even high item response rates can potentially be associated with non-response bias.

The Panel's intent is that the non-response analyses done as part of every telephone survey be the types of relatively low-cost analyses that can be done using information normally collected during the conduct of a telephone survey. Examples of such analyses are shown in Guideline #1. The assumption is that professional time for these analyses would be incorporated in the research firm's proposal for the survey project, and therefore in the contract issued for the survey. It was suggested that it would be useful to the project authority if the professional time allocated to non-response analyses were broken out in the proposal.

GUIDELINE #1

Some Panel members noted that non-response bias analysis is still an evolving field in terms of both techniques and empirical findings on relationships between response rate and risk of non-response bias in telephone public opinion surveys. In this context:

  • The analyses mentioned in the guideline are not meant to be an exhaustive list
  • It could be helpful for both research firms and people in the Government of Canada responsible for public opinion research to take a course in non-response bias analysis, to learn more about existing techniques as well as emerging techniques and empirical findings

GUIDELINE #2

The purpose of Guideline #2 is to recognize that there may be circumstances when a more intensive investigation of non-response bias is warranted, one which goes beyond the scope of what would be included in a typical contract for a telephone survey. An example would be a follow-up survey of nonresponders. The need for and potential value of any such "higher cost" procedures would be identified as part of the "lower cost" non-response analyses routinely conducted for each survey. If those analyses do indicate a need for more intensive investigation, then either a contract amendment or a new contract could be issued to cover the cost.

Response Rate Targets

The issue considered here was what is required or desirable with respect to the response rates achieved in telephone surveys. Inasmuch as response rate is an indicator of potential risk of non-response bias, a response rate target can be useful when planning a survey as part of managing this risk. Setting a response rate target has a concrete impact on various aspects of survey design and execution, such as the number of callback attempts, time in the field, and so forth.

The Panel recommended the following standards and guideline.

STANDARDS

  1. The telephone survey must be designed to achieve the highest practical rates of response, commensurate with the importance of survey uses, time constraints, respondent burden and data collection costs.
  2. Prior to finalization of the research design and cost for a particular telephone survey, a target response rate or response rate range must be agreed upon by the government department/agency and the research firm, consistent with response rate target Standard #1. The research will be designed and costed accordingly.

GUIDELINE

Taking into consideration both the time available for fieldwork and the importance of the survey, consider using the following response rate target ranges:

  • 10% to 20%: surveys for which only a short time period (less than three weeks) is available to conduct the fieldwork
  • 20% to 40%: surveys of moderate to high importance that will be in the field for at least three weeks
  • 40% to 60%: surveys of high importance, e.g., in terms of key policy decisions or resource allocation decisions
  • 60% to 80%: surveys with extraordinary response rate requirements and for which there are allowances for the time and budget required to achieve such high response rates
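
As a simple illustration of how these ranges might be applied in planning, the following sketch encodes them as a lookup. The rule ordering (time constraints considered first, then importance) and the category labels are an interpretation for illustration only; the Panel's guideline calls for judgment, not a mechanical rule.

    # Suggested response rate target range, following the Panel's guideline
    # ranges. "weeks_in_field" and the importance labels are assumptions.
    def suggested_target_range(weeks_in_field: float, importance: str) -> tuple:
        if weeks_in_field < 3:
            return (10, 20)   # short field period dominates other factors
        if importance == "extraordinary":
            return (60, 80)   # e.g., longitudinal recontact requirements
        if importance == "high":
            return (40, 60)   # key policy or resource allocation decisions
        return (20, 40)       # moderate to high importance, 3+ weeks in field

    print(suggested_target_range(2, "high"))      # (10, 20)
    print(suggested_target_range(4, "moderate"))  # (20, 40)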

The following are notes on the Panel's discussions with respect to the above standards and guideline.

STANDARD #1

The purpose of Standard #1 is to acknowledge the importance of response rate as an indicator of potential risk of non-response bias while at the same time requiring that response rate targets take into consideration the other survey characteristics listed, i.e., "the importance of survey uses, time constraints, respondent burden and data collection costs." A balanced perspective on response rate was considered important because a relatively lower response rate may not in fact be associated with non-response bias, and undue focus on response rate at the survey planning stage could lead to inadequate attention to other aspects of the survey that affect data quality.

GUIDELINE #1

The purpose here was to provide realistic and appropriate numeric response rate target ranges as guidelines for use in planning public opinion telephone surveys.

While many aspects of survey design can impact response rate, the suggestion was to focus on two dimensions in particular when setting response rate targets for public opinion telephone surveys.

  • Time in field: Some telephone surveys commissioned by the GOC have short time frames for the fieldwork. Examples include "overnight polls" on rapidly emerging issues, and omnibus surveys. These short time frames will result in lower response rates, and the suggested guideline was 10% to 20%. When a higher response rate target is required, for whatever reason, ample time must also be allowed to conduct the fieldwork. For example, for surveys of moderate to high importance the Panel suggested a response rate target range of 20% to 40%, but noted that this will probably require three weeks or more in the field.
  • Importance: For surveys on topics judged to be of high importance, for example in terms of impact on key policy decisions or on decisions about resource allocation in the country, a risk management perspective may suggest aiming for a relatively higher response rate. For such surveys, it was suggested that a realistic albeit still challenging response rate target range would be 40% to 60%. Beyond this, there may be extraordinary circumstances that require an even higher response rate target, e.g., a longitudinal survey where it is vital to recontact as many of the original respondents as possible. In such circumstances a response rate target range of 60% to 80% may be appropriate, but to achieve this it may also be necessary to allow much longer time in field and allocate substantially more financial resources.

The numeric ranges suggested by the Panel were based on their experience and judgment. However, they strongly recommended that the Government of Canada conduct research to examine this issue further.

  • It was strongly emphasized that any response rate numeric targets need to be grounded in actual experience conducting public opinion telephone surveys, in order to ensure that targets are practical and realistic. The Government of Canada's own experiences with telephone surveys would be an important, ongoing source of information about response rate.
  • The research should look at levels of response rate achieved and how response rate correlates with survey design characteristics. The research should also attempt to explore the extent to which non-response bias has affected public opinion survey results and how this might relate to response rate. It was noted the latter type of research would be facilitated if the GOC adopts the Panel recommendation that every survey include some analysis of the potential for non-response bias.
  • Past MRIA studies of response rate have shown changes in response rate over time. In this context, the Panel strongly recommended that numeric response rate targets be periodically reviewed to ensure they remain realistic and appropriate.

STANDARD #2

Standard #2 ensures there will be some dialogue between the project authority and the research firm around response rate. This will:

  • Ensure response rate is considered during the survey planning stage
  • Ensure both parties have a shared expectation for the response rate target for the survey
  • Enable the research firm to design, cost and conduct the survey accordingly

At minimum, the research firm should include response rate target assumptions in its proposal. Beyond this, the project authority may provide information in the Statement of Work such as a response rate target or response rate target range (e.g., using the numeric ranges in the proposed Guideline, or past experience), or information on relevant factors such as timing requirements or how the survey will be used and its importance.

With regard to the consequences of not meeting an agreed-upon response rate target or response rate target range, the Panel recommended the research firm should provide an analysis of why the target was not achieved. The Panel did not feel it appropriate to require the research firm to make additional attempts to achieve the target or to otherwise penalize the research firm, since there may be other factors outside the control of the research firm which impacted response rate or which were not known at the time the research was designed and costed.

Monitoring Response Rate During Data Collection

Overview

In the Best Practices: Improving Respondent Cooperation for Telephone Surveys report prepared by Phoenix Strategic Perspectives for PORD, a number of "best practices" were noted for improving response rates during data collection. The Panel was asked to consider whether the best practices related to the following should be adopted as standards or requirements by the Government of Canada:

  • Monitoring reasons for non-response during data collection
  • Monitoring level of non-response for different segments of the target population
  • Attempting refusal conversions

The Panel was also asked to comment on whether a brief questionnaire should be administered to people who refuse an interview or terminate the interview in order to get more detailed information on why they are refusing to participate in the survey.

Monitoring Non-response During Data Collection

There was consensus among Panel members that there should be a standard associated with monitoring reasons for non-response during data collection, and a guideline related to monitoring levels of non-response for different segments during data collection.

With regard to the first issue, monitoring reasons for non-response during data collection, the Panel agreed that this requirement needs to be separate from the general standard for monitoring of fieldwork in order to ensure that research firms clearly understand they are expected (a) to monitor the data collection process, and (b) to monitor the outcome of calls during data collection.

The Panel agreed to guidelines related to the types of circumstances where it may be possible or appropriate to monitor levels of non-response for different segments of the target population. Briefly, these are when one can (a) identify segments or variables to monitor, and (b) determine that these are relevant to response rate.

For example, for a general population RDD sample:

  • There are only a small number of variables available to define segments (e.g., age cannot be determined unless the interview was terminated after collection of age information)
  • The available variables are related more to the sample frame itself (i.e., region, population density, etc.) rather than to the characteristics of the population that may drive non-response levels for a particular survey (e.g., attitudes held on the particular subject).

In comparison, if the sample source is a list, key variables or segments of the population may be more easily identified and tracked.

STANDARDS

  • Monitoring of call dispositions/reasons for non-response shall be carried out on an ongoing basis, throughout the entire field period.
  • This information will be provided to the project authority upon request.

GUIDELINES

  • When key segments in the target population can be identified in the sample frame, monitor the level of non-response by segment during data collection (a brief sketch follows these guidelines)
  • When the overall response rate is lower than expected, further analysis of the call dispositions should be considered
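
The following is a minimal sketch of the kind of segment-level monitoring contemplated in the first guideline. It assumes each call record carries a segment variable available from the sample frame (here a hypothetical "region" field) and a final "disposition" value, and it treats only completed interviews as responding; the field names, the single responding category, and the flagging threshold are all illustrative assumptions.

    # Tally running response rates by sample frame segment during fieldwork
    # and flag segments falling well below the overall rate.
    from collections import defaultdict

    def segment_response_rates(call_records, flag_ratio=0.75):
        totals = defaultdict(lambda: {"responding": 0, "attempted": 0})
        for rec in call_records:
            seg = totals[rec["region"]]
            seg["attempted"] += 1
            if rec["disposition"] == "completed":
                seg["responding"] += 1
        overall = (sum(s["responding"] for s in totals.values())
                   / sum(s["attempted"] for s in totals.values()))
        report = {}
        for region, s in totals.items():
            rate = s["responding"] / s["attempted"]
            report[region] = {"rate": rate, "flag": rate < flag_ratio * overall}
        return overall, report

Run daily or weekly against the cumulative record of calls, a report of this kind gives early warning that a segment is under-responding while there is still time to adjust callback scheduling or interviewer assignments.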

Attempting Refusal Conversions

There was agreement among the Panel that there may be some situations in which it can be appropriate to attempt to convert refusals into completed interviews in order to improve response rates. There was also general agreement that attempts at converting refusals should be done "cautiously" or only under "extraordinary circumstances."

Many of the views expressed by the Panel are detailed in Best Practices: Improving Respondent Cooperation for Telephone Surveys:

Refusal conversions are an important and essential aspect of data collection for survey organizations. This practice involves an attempt to convert someone who has already said that he/she does not want to take part in a survey (or who terminated an interview) into a respondent. Done during a subsequent telephone call, converting refusals relies on senior, experienced interviewers calling back people who initially refused to be interviewed to try to persuade them to participate.

Use of refusal conversions should be handled with care, but this technique is effective in terms of increasing telephone survey response rates. Not only can it turn a refusal into a complete, but another call to a household after an initial refusal might reach someone other than the person who refused, and result in a completed interview. Depending on the department and agency sponsoring the PORD survey (and the study topic), there may be heightened sensitivity around the practice of refusal conversions.

The Government of Canada does not want to be, or be seen to be, pressuring Canadians to participate in telephone surveys. In these cases, the use of refusal conversions should be weighed against the potential for non-response bias should certain segments of the target population not respond to requests for an interview.

There was consensus among the Panel to adopt the following guidelines:

GUIDELINES FOR ATTEMPTING REFUSAL CONVERSIONS

If it is decided to attempt refusal conversions:

  • Use only senior experienced interviewers to make conversion attempts
  • Monitor the process to ensure that respondents are not reacting negatively to the additional contact
  • Try to do enough conversions to allow a comparison of original respondents to converts on key study variables

Getting More Information from People on Why They Are Refusing to Fully Participate in a Survey

Panel members were asked to comment on whether the following should be a requirement for all GOC telephone surveys:

There are different types of "refusals" on studies, occurring at different stages of the interviewing process. The current practice for recording these "refusals" is to classify them into one of three categories:

  • Household refusal: Before a respondent is selected
  • Respondent refusal: Before answering all qualifying questions
  • Qualified respondent break-off: Any termination after qualifying for the survey

To better understand the specific nature of refusals at each stage of the interview, one approach the Government of Canada could take is to develop a very short standard questionnaire (i.e., one or two questions) for interviewers to use to record more detail about why people are refusing to fully participate than the categories shown above capture (e.g., the specific reason a respondent is not willing to participate).

The idea would be that, by looking at this type of detailed information during the data collection phase, changes could be made to the questionnaire (e.g., the introduction, the order of questions) or the interviewing process.

The Panel generally agreed that in the form presented here, i.e., respondents being asked additional questions about their refusal, there are more downsides for the GOC than value to be gained. The fundamental objection, particularly among the GOC representatives, is that this may be viewed as "bordering on harassment" on the part of government and would likely reflect negatively on the GOC. In addition:

  • Even if asking these additional questions is just perceived to be "irritating," it was thought this type of requirement may adversely affect future response rates to surveys
  • A point was also raised about the potential additional cost associated with requiring information of this type to be collected for all surveys when it may not always be warranted

It is also the case that some Panel members felt more detailed information about refusals could be useful, but only if collecting it did not involve needless questions to the public or increase the cost of conducting surveys. Directly pertinent to this point is how most research firms generally keep track of refusals, in the context of other information about the sample frame. As stated in the MRIA standards, members are required to maintain records of the disposition of the contact sample, and these records must contain specific types of information. (Note: This information was also included in this report as one of the requirements for Survey Documentation.) The main categories stipulated by the MRIA serve as input to generate the MRIA's Response Rate Calculation. However, records of contact serve a number of other important roles in the execution of a survey, e.g., monitoring survey operations and evaluating the sample frame. Therefore, it is common practice in the industry to record the outcome of contact attempts in more detail than is required to generate the response rate calculation, particularly as related to refusals.

An illustration of this is to compare the Response Rate Calculation category Qualified respondent break-off with the level of detail that some research firms will include on their record of contact about refusals, which is then aggregated into the overall category (i.e., Qualified respondent break-off) for the response rate calculation. Detailed records are kept about terminations, including (a) the stage or question number at which the respondent refused to continue, and (b) the reason for termination, either as given by the respondent or as assessed by the interviewer.

Arguably, this standard element of record keeping provides, for any survey, types of information that are useful both during data collection (e.g., to make changes to the questionnaire or interviewing process) and after data collection, to help understand the nature of refusals.
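
To make the level of detail concrete, the following is a minimal sketch of a termination record of the kind described above, together with its roll-up into the coarse response rate category. The field names and values are illustrative assumptions, not an MRIA specification.

    # A detailed termination record: stage, last question reached, and reason,
    # later collapsed into "qualified respondent break-off" for the response
    # rate calculation.
    from dataclasses import dataclass

    @dataclass
    class TerminationRecord:
        case_id: str
        stage: str           # "household", "respondent", or "qualified"
        last_question: str   # question number at which the refusal occurred
        reason: str          # respondent-given or interviewer-assessed

    records = [
        TerminationRecord("A101", "qualified", "Q12", "interview too long"),
        TerminationRecord("A102", "qualified", "Q03", "topic too sensitive"),
        TerminationRecord("A103", "respondent", "intro", "no time"),
    ]

    # Roll-up: all qualified terminations collapse into one category.
    break_offs = sum(1 for r in records if r.stage == "qualified")

The detail (question number, reason) supports the uses noted above, while the roll-up preserves the categories needed for the response rate calculation.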

Based on the foregoing, there was consensus among the Panel that:

  • Neither a standard nor guidelines are required at this time.
  • What may be more useful for the GOC as a next step would be to undertake a review of records of contact for GOC surveys. The purpose of this review would be to identify what additional categories, if any, could be added to a record of contact to better capture and characterize refusals on a survey, without increasing either interviewer burden or cost.

In the absence of such a review, for the time being it was suggested that it be left to the project authority and the research firm to customize the record of contacts for a survey, as appropriate.

Document "Advisory Panel On Telephone Public Opinion Survey Quality - Final Report February 1, 2007" Navigation