Standards and Guidelines for Success Rate

This section details the standards and guidelines related to:

  • Calculation of "Success Rate"
  • Non-response Bias Analyses
  • Qualified Break-offs
  • Response/Success Rate Targets

Calculation of "Success Rate"

In the July 2007 Vue, the MRIA Response Rate Committee published an article discussing the concept of response rate as it applies to web-based surveys (What does response rate really mean in a web-based survey?). They conclude the article with proposed calculation formulas for "success rate." They also state that the "MRIA Response Rate Committee is recommending that the term response rate should not be used in reporting data collection outcomes for most web surveys."

In this article, the MRIA also invited comments on the proposed success rate terminology and calculations, implying that the terminology and formulas may evolve further. In this context, the Panel recommended a standard that (a) requires following the recommendations of the MRIA Response Rate Committee current at the time a survey is conducted, and (b) requires a record of contact dispositions that is compatible with the MRIA calculation recommendations.

Standard: Calculation of "Success Rate"

  • Calculations pertaining to success rate or level must be performed and described as recommended by the MRIA. The results must be included in the survey report. The survey report must also show a record of contact dispositions that includes the categories required to comply with the MRIA calculation formulas.
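For illustration only, the following Python sketch shows how a success rate might be computed from a record of contact dispositions. The disposition categories, counts and formula shown here are hypothetical; they do not reproduce the MRIA Response Rate Committee's formulas, which must be consulted directly when reporting.

    # Illustrative only: a hypothetical disposition record and a generic
    # "completes over usable invitations" rate. The actual categories and
    # formulas must follow the MRIA recommendations current at the time
    # of the survey; they are not reproduced here.
    dispositions = {
        "invitations_sent": 10000,
        "undeliverable": 400,          # bounced e-mail invitations
        "no_reply": 6600,
        "screened_out": 800,           # did not qualify for the survey
        "qualified_break_offs": 300,   # qualified but did not finish
        "completes": 1900,
    }

    usable_invitations = dispositions["invitations_sent"] - dispositions["undeliverable"]
    success_rate = dispositions["completes"] / usable_invitations
    print(f"Illustrative success rate: {success_rate:.1%}")  # roughly 19.8%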

Non-response Bias Analyses

Probability Surveys and Attempted Census Surveys

The Telephone report contained a standard and guidelines with respect to non-response bias analyses which apply to probability surveys and attempted census surveys. These also apply to such surveys when conducted online. For reference, the standard and guidelines are:

Standard for Probability Surveys and Attempted Census Surveys

  • All survey reports must contain a discussion of the potential for non-response bias for the survey as a whole and for key survey variables. Where the potential for non-response bias exists, efforts should be made to quantify the bias, if possible, within the existing project budget. If this is not possible, the likelihood and nature of any potential non-response bias must be discussed.

Guidelines for Probability Surveys and Attempted Census Surveys

  • The non-response analyses conducted as part of routine survey analysis would be limited to using data collected as part of the normal conduct of the survey, and could include techniques such as comparison of the sample composition to the sample frame, comparison to external sources, comparison of "early" versus "late" responders (illustrated in the sketch following this list), or observations made during data collection on the characteristics of nonresponders. The non-response analyses conducted for a particular survey would be tailored to the characteristics of that survey.
  • Consider contracting additional data collection or analyses when the non-response analyses conducted as part of routine survey analysis suggest there may be value in getting additional information.
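As a concrete illustration of the "early" versus "late" responder comparison mentioned in the guidelines above, the following Python sketch uses synthetic data; the field names, cut-off day and satisfaction variable are hypothetical.

    # A minimal sketch, with synthetic data, of comparing early and late
    # responders on a key survey variable (here, a satisfaction score).
    from statistics import mean

    respondents = [
        {"day_received": 1, "satisfaction": 6},
        {"day_received": 2, "satisfaction": 5},
        {"day_received": 3, "satisfaction": 6},
        {"day_received": 9, "satisfaction": 4},
        {"day_received": 10, "satisfaction": 3},
        {"day_received": 12, "satisfaction": 4},
    ]

    CUTOFF_DAY = 7  # hypothetical split between "early" and "late" responders
    early = [r["satisfaction"] for r in respondents if r["day_received"] <= CUTOFF_DAY]
    late = [r["satisfaction"] for r in respondents if r["day_received"] > CUTOFF_DAY]

    # If late responders resemble nonresponders more than early responders do,
    # a large gap here hints at possible non-response bias on this variable.
    print(f"Early responders (n={len(early)}): mean = {mean(early):.2f}")
    print(f"Late responders  (n={len(late)}): mean = {mean(late):.2f}")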

Non-probability Surveys

In the case of non-probability surveys, the applicability of non-response bias analysis is not always as clear as it is in the case of probability and attempted census surveys. With a non-probability sample, the results technically cannot be statistically projected to a population because it is not known, via selection probabilities, how the sample relates to the population of interest. In this context, since there is no well-founded estimate of a population parameter in the first place, the concept of non-response bias in estimating that parameter is difficult to apply.

To put this another way, statistically speaking one could say that the results of a survey based on a non-probability sample describe only that particular sample of people. It does not matter if there are any nonresponders during the process of generating the sample, because the nonresponders by definition are not part of the sample. Nonresponders only matter when one is trying to project the results to a population larger than the obtained sample itself.

Nonetheless, at a practical level, results from non-probability surveys are sometimes projected to target populations, so the nonresponders in such surveys can matter when assessing the quality of the survey estimates.

The potential value of some sort of non-response (NR) bias analysis may vary by type of non-probability survey or type of non-probability sampling:

Examples of situations where NR bias analyses might be useful

  • Access panel survey where the outgoing sample is carefully constructed to try to obtain a final sample "representative" of a particular population

    For such a survey, the size and composition of the outgoing sample would presumably be based on normative success rates obtained in that particular access panel, both overall and perhaps by sub-group (e.g., perhaps success rates are typically lower for young adults, so the outgoing sample for that sub-group would be disproportionately large). For purposes of NR bias analysis, of interest would be any marked departures from the normative success rates typical of that panel. For example, if the typical success rate is 50% for young male panellists but the actual success rate for a particular survey is 30%, it might be of interest to explore why this lower success rate occurred and whether it might suggest a potential for bias in any of the key survey variables (a simple check of this sort is sketched after these examples).

  • List-based survey based on a carefully constructed judgment sample

    Suppose a non-probability sample is drawn based on a carefully constructed list in which an attempt was made to represent various segments of a population (albeit not in a probability sampling sense). Then, it could be prudent to do an analysis of non-response to see to what extent the predetermined segments of interest are represented in the final sample.
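To make the access panel example above more concrete, the following Python sketch flags sub-groups whose observed success rate departs markedly from the panel's normative rate. The sub-groups, normative rates, counts and flagging threshold are hypothetical.

    # A minimal sketch of flagging departures from an access panel's normative
    # success rates by sub-group; large departures would prompt a closer look
    # at the potential for bias in key survey variables.
    normative_rates = {"young male": 0.50, "young female": 0.48, "55 plus": 0.65}
    observed = {
        "young male": {"invited": 1000, "completed": 300},
        "young female": {"invited": 900, "completed": 430},
        "55 plus": {"invited": 700, "completed": 450},
    }

    FLAG_THRESHOLD = 0.10  # flag departures larger than 10 percentage points

    for group, counts in observed.items():
        rate = counts["completed"] / counts["invited"]
        departure = rate - normative_rates[group]
        if abs(departure) > FLAG_THRESHOLD:
            print(f"{group}: observed {rate:.0%} versus normative "
                  f"{normative_rates[group]:.0%}; investigate possible bias")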

Examples of situations where NR bias analysis may not apply or be useful

  • River-sampling

    Note: River-sampling is a method for recruiting individuals who are willing to participate in a single survey but not necessarily join an access panel. The recruiting is done in real time from a network of websites with which a research company has developed a referral relationship and for which the company pays a per-recruit fee to the referring sites.

    In river-sampling, the concept of "nonresponders" does not seem to apply.

  • Propensity scoring

    Note: Propensity scoring is a weighting procedure for adjusting an online non-probability sample to attempt to look more like a telephone probability sample (or some other sort of probability sample). A propensity scoring model is based on conducting parallel online and telephone surveys, and associating select respondent characteristics with a "propensity" to have participated in the online survey versus the telephone survey. Data in subsequent online surveys are weighted so that the propensity score distribution is similar to that found in the reference telephone probability sample used in developing the model (the weighting idea is sketched after these examples).

    Non-response bias analysis does not apply when propensity scoring is used. To put this another way, if propensity scoring works perfectly, it introduces into the survey results the non-response biases that were present in the reference telephone survey used to calibrate the propensity scoring system. So, if any non-response bias analysis were to be done, it would be done on the reference telephone survey, not on the online survey itself.
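The following Python sketch gives a simplified picture of the propensity-weighting idea described in the note above. The respondent characteristics, sample sizes and two-stratum scheme are hypothetical; operational models typically use many more characteristics and finer propensity strata (e.g., quintiles).

    # A minimal sketch, assuming NumPy and scikit-learn are available, of
    # weighting an online sample so its propensity score distribution
    # resembles that of a telephone reference sample.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Stacked calibration data from hypothetical parallel surveys:
    # columns = [age, heavy internet user (1/0)]; label 1 = online, 0 = telephone.
    X = np.array([[25, 1], [32, 1], [45, 0], [60, 0], [38, 1], [52, 0], [29, 1], [67, 0]])
    y = np.array([1, 1, 1, 0, 1, 0, 0, 0])
    model = LogisticRegression().fit(X, y)

    # Propensity scores: estimated probability of being an online respondent.
    phone_scores = model.predict_proba(X[y == 0])[:, 1]

    # Scores for respondents in a subsequent online survey (hypothetical records).
    online = np.array([[27, 1], [55, 0], [41, 1], [30, 1], [62, 0]])
    online_scores = model.predict_proba(online)[:, 1]

    # Two propensity strata for illustration; weight online respondents so their
    # distribution across the strata matches the telephone reference distribution.
    cut = np.median(np.concatenate([phone_scores, online_scores]))
    strata = (online_scores > cut).astype(int)
    phone_share = np.array([np.mean(phone_scores <= cut), np.mean(phone_scores > cut)])
    online_share = np.array([np.mean(online_scores <= cut), np.mean(online_scores > cut)])
    weights = phone_share[strata] / online_share[strata]
    print(weights)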

In this context, the Panel recommended a standard and guidelines for non-response bias analyses for non-probability surveys.

Standard for Non-probability Surveys

  • The proposal for a non-probability survey must state whether or not any non-response bias analyses are planned. If not, a rationale must be stated. If analyses are planned, the general nature of the analyses should be described.

Guidelines for Non-probability Surveys

  • In the case of surveys based on non-probability samples, consider including in the survey report a discussion of the potential for non-response bias for the survey as a whole and for key survey variables when both of the following conditions are met:
    • The survey results are described as being representative of a population larger than the actual survey sample itself
    • Something is known about the characteristics of nonresponders (a simple comparison of this sort is sketched after these guidelines)

    Where the potential for non-response bias exists, efforts should be made to quantify the bias, if possible, within the existing project budget. If this is not possible, the likelihood and nature of any potential non-response bias must be discussed in the final report.

  • The non-response analyses would be limited to using data collected as part of the normal conduct of the survey. The non-response analyses conducted for a particular survey would be tailored to the characteristics of that survey.
  • Consider contracting additional data collection or analyses when the non-response analyses suggest there may be value in getting additional information.
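As an example of what such an analysis might look like when something is known about the characteristics of nonresponders (the second condition above), the following Python sketch compares the regional profile of the invited sample with that of the respondents. The regions and counts are hypothetical (e.g., drawn from an access panel's profile data).

    # A minimal sketch, with hypothetical data, of comparing the invited sample
    # with the achieved sample on a characteristic known for nonresponders.
    from collections import Counter

    invited = ["East"] * 300 + ["Central"] * 500 + ["West"] * 200
    responded = ["East"] * 60 + ["Central"] * 180 + ["West"] * 90

    def shares(values):
        counts = Counter(values)
        return {region: counts[region] / len(values) for region in ("East", "Central", "West")}

    print("Invited sample:", shares(invited))
    print("Respondents:   ", shares(responded))
    # Invited: East 30%, Central 50%, West 20%; respondents: East 18%, Central 55%,
    # West 27%. The under-representation of the East suggests a potential for
    # non-response bias on key variables that differ by region.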

Qualified Break-offs

A qualified break-off is a respondent who qualifies for the survey (e.g., passes upfront screening) but then at some point during the interview refuses to participate further. Because of the self-administered nature of online surveys, the qualified break-off rate can be fairly high, whereas qualified break-off rates in telephone surveys are expected to be quite low because of the human interaction.

For example, in an experimental study of the "qualified break-off" phenomenon, Hogg & Miller (Quirk's, July/August 2003, Watch out for dropouts) observed qualified break-off rates (dropout rates, in their terminology) ranging from 6% to 37%.

They also observed:

Obtaining different survey findings because respondents dropped out of surveys can be seen as an extension of the common problem of non-response error. Non-response error occurs when those who do not respond to survey invitations might be different in important ways from those who do. Similarly, those who do not complete surveys can be different in important ways from those who do finish.

The Panel agreed to adopt the following standard which would apply to all online surveys, including both probability and non-probability surveys:

Standard for Qualified Break-offs

  • If technically feasible, the data records for qualified break-offs should be retained in order to permit comparisons of qualified break-offs with respondents who completed the survey, which is a form of non-response bias analysis. Where there is a sufficient sample size of qualified break-offs, the potential for non-response bias should be explored by performing the comparisons, and the conclusions included in the survey report. Where the potential for non-response bias exists, efforts should be made to quantify the bias, if possible, within the existing project budget. If this is not possible, the likelihood and nature of any potential non-response bias must be discussed.
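For illustration, the following Python sketch compares qualified break-offs with completers on a characteristic captured before the break-off point (an age group answered during screening). The counts are hypothetical, and SciPy is assumed to be available for the chi-square test.

    # A minimal sketch of the comparison called for in the standard above.
    from scipy.stats import chi2_contingency

    #              18-34  35-54  55 plus
    completers = [  120,   210,   170]
    break_offs = [   60,    30,    10]

    chi2, p_value, dof, expected = chi2_contingency([completers, break_offs])
    print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}")
    # A small p-value indicates that break-offs differ from completers on age
    # group (here, break-offs skew young), flagging a potential for non-response
    # bias on variables related to age that should then be quantified or, at a
    # minimum, discussed in the survey report.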

Response/Success Rate Targets

The starting point for the Panel was the Telephone report, which contained standards and guidelines with respect to response rate targets. The Panel generally agreed with a modified version of the standards.

Notably, while the Panel agreed in principle with the advisability of setting numeric success rate target guidelines for online surveys based on probability samples (as had been done for telephone surveys), most members of the Panel felt that experience with online surveys is currently too limited to set guidelines for numeric targets with confidence; it was also felt that the targets might vary by type of online methodology (e.g., access panel versus website visitor intercept).

Several Panelists suggested PORD should publicize response/success rates for online surveys as projects accumulate, in order to (a) help POR Coordinators establish reasonable targets for particular projects, and (b) contribute to perhaps setting guidelines for numeric targets at some point in the future.

The Panel recommended the following standards:

Standards for Response/Success Rate Targets

  1. Online or multi-mode surveys based on probability or attempted census samples must be designed to achieve the highest practical rates of response/success (over the various modes), commensurate with the importance of survey uses, time constraints, respondent burden and data collection costs.
  2. Prior to finalization of the research design and cost for any particular online or multi-mode survey, a target response/success rate or response/success rate range (over the various modes) must be agreed upon by the government department/agency and the research firm. The research will be designed and costed accordingly.

Document "The Advisory Panel on Online Public Opinion Survey Quality - Final Report June 4, 2008" Navigation