Standards and Guidelines with respect to Use of Multiple Panels in the Execution of a Survey

It is sometimes the case that a panel provider will supplement the sample from the primary panel with samples from other panels. These other panels might be specialty panels operated by the same panel company, or panels operated by other companies. This use of multiple panels can occur, for example, when particularly large sample sizes are required, and/or when the target population includes lower incidence groups.

The issue for the GC is ensuring that data quality is maintained when other panels, which have no direct contractual arrangement with the GC, are used. For example, the GC might issue a Standing Offer contract to Panel Company A based on its panel management quality standards, but a survey conducted by Panel Company A might also include samples from Panel Companies B and C, which are not on the Standing Offer.

One possible solution to the issues raised by use of multiple panels is simply to forbid the practice. However, the Advisory Panel did not recommend this, but rather acknowledged that in some circumstances it can be necessary and useful to use multiple panels in order to facilitate completion of a project.

The Panel agreed that:

  • The identity of other panels must be disclosed before they are used.
  • All panels used must conform to the Government of Canada standards with regard to access panel management, and it is the responsibility of the lead panel provider to ensure this if other panels are to be used. If secondary panels do not fully comply with GC standards, then areas of noncompliance must be disclosed and discussed prior to the use of the panels.

The Advisory Panel recommended the following standards and guideline.

Standards: Use of Multiple Panels

  • If it is determined that multiple access panels will be used for an online survey, it is still the responsibility of the lead panel provider to ensure that the standards and guidelines for Sampling Procedures and for Response/Success Rate are adhered to for the survey as a whole. This requires the lead panel provider to have sufficient information and control over the other panel samples to comply with these standards.
  • The identity of the other panels must be disclosed and agreed to prior to the use of these other panels.
  • Any other panels used must conform to MRIA requirements regarding use of unsolicited email.
  • Any other panels used must conform to the Government of Canada standards with respect to panel management. If a panel does not fully comply with these standards, the areas of noncompliance must be disclosed prior to use of the panel, and agreement obtained to use the panel.
  • The distribution of the initial and completed samples across the different panels must be described.
  • A description must be given of the steps taken to avoid duplication in the sample of individuals who are members of more than one panel (one simple approach is sketched after this list).
  • Panel source must be part of the data record for each respondent.
  • Both prior to fieldwork, including in the research proposal, and in the survey report, there must be discussion of any data quality issues that might arise because of the use of multiple panels for a survey.
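By way of illustration, the following is a minimal sketch of one way a lead panel provider might screen for cross-panel duplicates before fielding, assuming each panel supplies a list of invitee email addresses; the panel names, record layout and normalization rule are illustrative assumptions, not a prescribed method:

```python
import hashlib

def normalize(email: str) -> str:
    """Normalize an address so trivial variants compare as equal."""
    return email.strip().lower()

def dedupe_across_panels(panel_samples: dict[str, list[str]]) -> dict[str, list[str]]:
    """Keep only the first occurrence of each address across all panels.

    panel_samples maps a panel name to its list of invitee email
    addresses. Hashes are compared rather than raw addresses so the
    same check can be run on pre-hashed lists exchanged between firms.
    """
    seen: set[str] = set()
    deduped: dict[str, list[str]] = {}
    for panel, emails in panel_samples.items():
        kept = []
        for email in emails:
            digest = hashlib.sha256(normalize(email).encode()).hexdigest()
            if digest not in seen:
                seen.add(digest)
                kept.append(email)
        deduped[panel] = kept
    return deduped

samples = {
    "Panel A": ["jane@example.com", "sam@example.com"],
    "Panel B": ["JANE@example.com ", "lee@example.com"],  # first entry duplicates a Panel A member
}
print(dedupe_across_panels(samples))  # Panel B retains only lee@example.com
```

In practice, email matching catches only part of the overlap; providers may also compare postal addresses or use digital fingerprinting, and whatever combination is used should be described as the standard above requires.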

Guideline: Use of Multiple Panels

  • It is recommended that selected data gathered during survey administration be analyzed in order to identify the nature of any "panel effects" that might exist. Such data could include, for example, response/success rates and results for selected survey variables; an illustrative analysis is sketched below.
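As an illustration of the kind of analysis this guideline contemplates, the sketch below cross-tabulates a single survey variable by panel source and applies a chi-square test of independence. The column names and data are hypothetical, and a real analysis would cover several variables and the response/success rates as well:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical completed-interview records; "panel_source" is the
# per-respondent panel identifier required by the standards above.
df = pd.DataFrame({
    "panel_source": ["A"] * 6 + ["B"] * 6,
    "q1_satisfaction": ["high", "high", "low", "high", "low", "high",
                        "low", "low", "low", "high", "low", "low"],
})

# Cross-tabulate the survey variable by panel and test for independence.
table = pd.crosstab(df["panel_source"], df["q1_satisfaction"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")  # a small p-value suggests a possible panel effect
```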

Incentives/honoraria

The starting point for the Online Advisory Panel was the section in the Telephone report on Incentives/Honoraria. The Panel generally agreed with these existing guidelines, but recommended a few modifications to:

  • Address the need for transparency about incentives at several stages in the research process. This included adding a guideline related to post-survey transparency, requiring that the names of contest winners be posted and that appropriate records be kept in the event an audit is requested.
  • Add a standard and reword the guidelines to reflect that incentives are often expected by respondents to online surveys, particularly for those surveys conducted by research companies.

Standard for Incentives/honoraria

  • The details of any incentives/honoraria to be used for an online survey must be provided in both the proposal and survey documentation, including:
    • The type of incentive/honoraria (e.g., monetary, non-monetary)
    • The nature of the incentive - e.g., prizes, points, donations, direct payments, etc.
    • The estimated dollar value of the incentives to be disbursed

Guidelines for Incentives/honoraria

  • Monetary incentives should be used only when there are strong reasons to believe they would substantially improve the response rate or the quality of responses. Note that in online surveys, particularly those conducted by research companies, respondents have often come to expect that an incentive will be offered.
  • The decision to use respondent incentives (monetary or non-monetary) or honoraria to gain respondent cooperation should carefully weigh the potential for non-response bias in the study against the potential that the use of incentives/honoraria can affect the sample composition (i.e., who agrees to participate in the survey) and/or the possibility that responses to some questions in the survey may be influenced.
  • The use of respondent incentives (monetary or non-monetary) or honoraria, and the magnitude of incentives or honoraria, may be considered as a strategy to improve response rates under one or a combination of these circumstances:
    • When using sample sources that have pre-existing commitments to potential respondents with regard to incentives/honoraria (e.g., commercial access panels)
    • At later stages of the field process rather than for all interviews; note, however, that great caution must be used in offering different incentive amounts to different respondents
    • When response burden is high or exceptional effort is required on the part of the respondent (e.g., when interviews exceed 20 minutes in length, respondents are required to do some preparatory work for the online survey, or the study is complex)
    • When the target population is low incidence (e.g., 5% or less of the population) or the population size is very limited
    • When the population is made up of hard-to-reach target groups (e.g., physicians, CEOs, recent immigrants/newcomers)
    • When it can be demonstrated that:
      1. There will be a cost saving, e.g., incentives/honoraria are less expensive than large numbers of re-contacts
      2. The use of incentives/honoraria is required to meet the study schedule

    Note: Under no circumstances are employees of the Government of Canada to receive monetary incentives or honoraria for participating in GC sponsored research in which GC employees are a specified target group.

  • Consider the use of non-monetary incentives wherever possible and appropriate. These can include, for example, colouring books for children's surveys or a copy of the survey findings (e.g., the executive summary, survey highlights) for special-audience research. However, the type of incentive selected must in no way influence potential answers to survey questions.
  • Monetary incentives/honoraria in the form of "cash" disbursements (either directly to the respondent or for example to a charity of their choice), gift certificates and entries in prize draws can be considered. The amount of the cash disbursement and gift certificates should be kept as low as possible, without compromising the effectiveness of the incentive/honorarium.
  • The use of incentives (monetary or non-monetary) or honoraria for a survey will also require decisions and documentation as to:
    • When incentives/honoraria will be provided, whether at initial contact or post-survey
    • To whom incentives/honoraria will be given, whether to all contacts (regardless of whether they complete the survey) or only to those who participate in the survey
    • How the incentives/honoraria will be paid out/distributed by the research firm or the Government of Canada and the associated cost for this disbursement (e.g., professional time and direct expenses)
  • When prizes or lotteries are used, the pertinent provincial and federal legal requirements must be met. Where consistent with these legal requirements, the names of the winners should be posted on the website of the responsible organization. It is recommended, out of concern for the privacy of these individuals, that they be identified only by their first name and the first letter of their last name. Records of how prizes or lottery amounts were awarded, and of all disbursements, must be maintained by the third-party provider or government department responsible for the survey. These records should be available for audit to confirm that prizes or lottery amounts were awarded and distributed appropriately.

Fieldwork Monitoring

The Panel was asked to consider whether or not standards or guidelines are required related to:

  • Monitoring online surveys once the fieldwork is underway, in order to ensure:
    • The survey questionnaire is being administered as intended
    • Interview data are being recorded appropriately
  • Detecting and dealing with satisficing, that is:
    • Flatlining, straightlining or replicated answer patterns
    • Rushed answers
    • Illogical and unreasoned answers

Monitoring of Online Survey Fieldwork

There was general consensus by the Panel that monitoring during the fieldwork is important to detect and address any issues while the survey is in the field rather than once the fieldwork is completed.

The Panel recommended the following standard and guidelines:

Standard for Monitoring of Online Survey Fieldwork

  • Each online survey must be closely monitored throughout the fieldwork to ensure that responses are valid, that the survey is administered consistently throughout the data collection period, and that the responses are being recorded accurately.

Guidelines for Monitoring of Online Survey Fieldwork

  • Monitor the following aspects of fieldwork in order to identify any adjustments that might be needed during the fieldwork period:
    1. Survey invitations, for example, when contact attempts related to participation are made and when they are responded to
    2. Survey response, for example:
      • Analysis of drop-offs: causes, respondent characteristics, and at what points in the questionnaire drop-offs occur (a simple tally is sketched after this list)
      • Frequent (e.g., daily) review of completed interviews in terms of respondent sources and respondent characteristics
    3. Survey experience, for example:
      • Page loading times
      • Respondent contact with "help" desk/technical support: number and reasons
  • Monitor the quality control of all aspects of the survey experience from the respondent's perspective, for example by having in-house staff act as "proxy" quality assurance respondents during the course of the fieldwork, or by other methods.
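For the drop-off analysis mentioned above, the following is a minimal sketch of tallying the last question answered in each incomplete interview; the record layout and question identifiers are illustrative assumptions:

```python
from collections import Counter

def dropoff_points(partials: list[dict], question_order: list[str]) -> Counter:
    """Count, for each question, how many drop-offs last answered there.

    partials holds incomplete interview records; a question key is
    present in a record only if that question was answered.
    """
    last_answered: Counter = Counter()
    for rec in partials:
        answered = [q for q in question_order if q in rec]
        last_answered[answered[-1] if answered else "none"] += 1
    return last_answered

partials = [
    {"q1": 5, "q2": 3},           # abandoned after q2
    {"q1": 4, "q2": 2, "q3": 1},  # abandoned after q3
    {"q1": 2, "q2": 5},           # abandoned after q2
]
print(dropoff_points(partials, ["q1", "q2", "q3", "q4"]))
# Counter({'q2': 2, 'q3': 1}): a spike at one question may signal a design problem
```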

Detecting and Dealing with Satisficing

Satisficing refers to completing a questionnaire (or part of a questionnaire) with little thought or care.

The ideal respondent "optimizes" as s/he answers every question, conducting a complete and unbiased search of memory and fully integrating the retrieved information. By contrast, satisficers' responses are formulated with reduced thoughtfulness, careless integration of retrieved information, and haphazard selection of response choices.

The self-administered nature of online surveys is associated with some risk of satisficing behaviour - although satisficing is by no means a problem unique to online surveys.

Satisficing can occur for different reasons - for example:

  • The respondent may have something else they want to do and they want to get through the survey as quickly as possible.
  • The respondent may only be "in it for the money" and as a result do as little as possible to get through the survey to receive their incentive.
  • The respondent may not find the survey topic interesting or personally relevant and as a result not think much about the answers to the questions.
  • The questionnaire may be poorly designed and out of dislike or frustration the respondent may hurry through the questionnaire.
  • The questionnaire may exceed the cognitive abilities of the respondent, leading to frustration or to seemingly illogical answers.

IMRO (IMRO Guidelines for Best Practices in Online Sample and Panel Management) provides the following commentary on satisficing:

In general there are four types of "cheating or satisficing behaviors" that should be screened for.

  1. Flatlining, Straightlining or Replicated Answer Patterns: This behavior is demonstrated by people rushing through the survey giving little or no thought to the responses. Typically the survey should contain pattern recognition algorithms to detect a series of the same answers (e.g., all "4s" on a matrix question, or a replicated zig-zag pattern). Caution should be applied in tossing interviews for pattern replication unless there is little or no chance that the pattern represents a legitimate opinion.
  2. Rushed Answers: Respondents who take a survey at a much faster than normal rate are probably not paying close enough attention to their answers. A mean time to complete should be calculated during pre-testing. Anyone who completes in less than half this time should be considered suspect. Note: when tracking speed in real time, be sure to use the mode rather than the mean as the time standard. This is because a respondent may be interrupted and finally complete hours later, radically increasing the mean time to complete.
  3. Illogical and Unreasoned Answers: Another problem, related to flatlining, is respondents not taking the time to read questions correctly. Because randomly answering questions can escape a pattern recognition program, additional tests are advisable. The degree to which people are paying attention can be tracked by performing tests on answers that should logically be very different. If, for instance, someone rates something as "too expensive", they should not also rate it as being "too cheap". This type of answer matching must be done on an ad hoc basis and instigated by the survey designer.
  4. Automated Survey Responses: Some potential survey cheaters use automated "keystroke replicators" or "field populators", which attempt to recreate many duplicate surveys using a single survey as a template. Panel providers should ensure that technological blocks are in place to detect and defeat these mechanisms prior to admission to the survey itself.

    Points for Consideration:

    Unlike phone and in-person interviewing, online research is self-administered. Whenever the survey is administered without a proctor, some level of error is more likely to occur. However, it is more likely that people will be lazy (straight-lining answers, for example) rather than being outright dishonest. Pattern recognition and real-time error checking can be employed to avoid this type of bad survey behavior.

With regard to the IMRO point about "automated survey responses": this refers to a respondent answering a survey multiple times. A standard has already been stated for this matter in the section of this report, Validation of Respondents.

The IMRO commentary was written for access panel providers. However, satisficing can potentially be an issue in any online survey, regardless of sample source. That said, it may be that the risk of satisficing behaviour differs by survey context and sample source. For example, it may be that commercial access panels should be held to a higher standard of screening for satisficing than some other sample sources:

  • Risk: Arguably, the risk of satisficing may be higher in commercial services, because they use respondents who are recruited on the basis of being paid to take surveys and these respondents are likely completing multiple surveys within a given period of time.
  • Resources: Commercial access panels may have invested in the computing and programming resources that enable them to do some automated screening for satisficing. By contrast, the ability and resources to do automated checking may not be as readily available in other survey contexts, which may depend more on manual checking of data records for evidence of satisficing behaviour.

The Panel recommended the following standard and guidelines related to detecting and dealing with satisficing. Note: The Panel also recommended there should be requirements (a) to at least consider whether and how to deal with satisficing at the research design phase of a project, and (b) to indicate the cost impact of measures that might be taken to deal with satisficing. A standard and guideline related to these two points have been added to Proposal Documentation.

Standards for Detecting and Dealing with Satisficing

"Rushed Answers"

  • Where it is technically feasible to record questionnaire completion time, each online survey should set a criterion defining "rushed answers" based on total time to complete the survey. The suspect records should be examined, and judgments made as to whether any of the suspect records should be deleted from the final sample (one way to implement such a criterion is sketched after this list). The following requirements also apply:
    • The number of records deleted should be stated in the survey report, together with a general indication of the bases for deletion
    • The final data file should contain an indicator variable flagging whether or not a record was classified as "rushed"
    • The data records for the deleted cases should be available upon request
    • The final data file must contain a variable giving the total time to complete the questionnaire (when measurement of total completion time is technically feasible)
    • The final data file must contain a variable giving the total number of questions answered
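The following is a minimal sketch of one way to apply such a completion-time criterion, assuming pre-test timings are available. Following the IMRO note on interrupted respondents, a robust central value (here the median, as a simple stand-in for the mode) is used rather than the mean; the record fields are illustrative assumptions:

```python
import statistics

def flag_rushed(records: list[dict], pretest_seconds: list[float]) -> list[dict]:
    """Flag records completed in less than half the typical pre-test time.

    A robust central value is preferred over the mean because a
    respondent interrupted mid-survey can inflate completion times.
    Flagged records are suspect, not automatically deleted.
    """
    threshold = statistics.median(pretest_seconds) / 2
    for rec in records:
        rec["rushed_flag"] = int(rec["completion_seconds"] < threshold)
    return records

pretest = [600, 640, 580, 700, 5400]  # one interrupted pre-test respondent
records = [
    {"respondent_id": 1, "completion_seconds": 250},
    {"respondent_id": 2, "completion_seconds": 615},
]
for rec in flag_rushed(records, pretest):
    print(rec)  # only respondent 1 is flagged (250 < 640/2)
```

The rushed_flag field corresponds to the indicator variable the standard requires in the final data file.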

Guidelines for Detecting and Dealing with Satisficing

"Flatlining, Straightlining, and Replicated Answer Patterns"

  • Commensurate with risk and resources, it is recommended that survey providers implement procedures to detect flatlining, straightlining and replicated answer patterns, and have criteria for deleting from the final sample respondents for whom data quality is sufficiently suspect (an illustrative check is sketched after this list). In addition, if detection procedures are implemented:
    • There should be a clear, non-technical description of the answer patterns checked for, and the procedures used to detect them; the elimination criteria should be stated in the survey report
    • The final data file should contain an indicator variable flagging whether or not a data record was classified as suspect
    • The data records for the deleted cases should be available upon request
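The following is a minimal sketch of a straightlining check on a single matrix (grid) question; the item names are hypothetical, and real detection procedures would also look for replicated zig-zag patterns across grids:

```python
def flag_straightlining(record: dict, grid_items: list[str]) -> bool:
    """Return True when every answer in a grid battery is identical.

    A flagged record is suspect, not automatically deleted: per the
    IMRO caution, a uniform pattern can still be a legitimate opinion.
    """
    answers = [record[item] for item in grid_items]
    return len(set(answers)) == 1

grid = ["q5a", "q5b", "q5c", "q5d"]
print(flag_straightlining({"q5a": 4, "q5b": 4, "q5c": 4, "q5d": 4}, grid))  # True: all "4s"
print(flag_straightlining({"q5a": 4, "q5b": 2, "q5c": 4, "q5d": 1}, grid))  # False
```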

"Illogical and Unreasoned Answers"

  • Where it is judged useful, and judged appropriate in terms of impact on questionnaire length, consider introducing checks for illogical or unreasoned responding (an example check is sketched after this list). If such checks are incorporated into the survey questionnaire:
    • The final data file should contain indicator variables flagging whether or not an instance of illogical or unreasoned responding occurred
    • The treatment of data from illogical/unreasoning responders should be described in the survey report
    • The data records for any deleted cases should be available upon request
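The following is a minimal sketch of the kind of ad hoc answer matching the IMRO commentary describes, using its "too expensive"/"too cheap" example; the question names and the 1-5 agreement scale are hypothetical:

```python
def flag_contradiction(record: dict) -> bool:
    """Flag a respondent who agrees strongly (4 or 5 on a 1-5 scale)
    with two logically incompatible statements."""
    return record["q7_too_expensive"] >= 4 and record["q8_too_cheap"] >= 4

print(flag_contradiction({"q7_too_expensive": 5, "q8_too_cheap": 4}))  # True: illogical pair
print(flag_contradiction({"q7_too_expensive": 5, "q8_too_cheap": 1}))  # False
```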

Attempted Recontacts

The Advisory Panel considered standards and guidelines for recontact attempts for surveys:

  • using probability samples, or based on attempted census samples
  • using non-probability samples

Probability Sample/Attempted Census

In a probability survey or attempted census, the role of attempted recontacts of nonrespondents is to improve the response rate, which in turn can lower the risk of non-response bias. In general, the greater the importance of a survey, the higher the target response rate should be, and therefore the greater the number of attempted recontacts.

In the case of telephone surveys, the Government of Canada standard is to do a minimum of eight call-back attempts.

Depending on the online sample source and sampling process:

  • Attempted recontact may not be possible - for example, in river-sampling the concept of attempted recontact does not appear to apply
  • Attempted recontact may occur via different modes

Email would be the most common mode for attempted recontacts. However, if telephone dialing was part of the sampling process, then the attempted recontact might be done by telephone. Or, if an attempted email contact is "bounced back", then a recontact by telephone might be attempted, provided respondent telephone numbers are available. Other modes are at least theoretically possible as well - e.g., in-person (perhaps for an employee survey), mail or fax.

Different modes of attempted recontact have different levels of "intrusiveness". For example, telephone contacts that do not result in contact tend to have low intrusiveness (e.g., no answer, busy signal, answering machine with no message left). By contrast, email arguably has a higher level of intrusiveness, because it will be seen by the respondent (an email contact is perhaps analogous to a message left on an answering machine).

The Panel agreed that the number of attempted recontacts that would be considered "intrusive" might vary by study, and the decision was to state a minimum number of recontact attempts rather than a maximum. When setting a desired number of attempted recontacts for a survey, the intensity of the recontact effort should balance both potential improvements to sample size/composition and degree of intrusiveness in terms of respecting a person's right to choose not to participate. So, for example, for many online surveys the numeric standard for telephone call-backs (n=8) may not be appropriate for email recontacts.

There was general agreement by the Panel to adopt the following standard for recontact attempts for probability surveys and attempted census surveys.

Standard: Attempted Recontacts for a Probability Survey or Attempted Census

  • In the case of probability or attempted census surveys where recontact is feasible, attempts must be made to recontact nonrespondents using an appropriate contact procedure.
  • In general, the intensity of the recontact effort should balance both potential improvements to data quality and degree of intrusiveness in terms of respecting the person's right to choose not to participate.
  • Where recontact by email is judged the appropriate method, a minimum of two attempted recontacts should be made (a simple way to track this is sketched below). Where recontact by telephone is judged the appropriate method, the standard for telephone surveys applies (i.e., a minimum of eight call-backs).
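The following is a minimal sketch of tracking attempted email recontacts against the two-attempt minimum in this standard; the record fields and the send_reminder stand-in are illustrative assumptions about the survey platform:

```python
MIN_EMAIL_RECONTACTS = 2  # minimum attempted recontacts per this standard

def send_reminder(email: str) -> None:
    """Stand-in for the survey platform's reminder-email call."""
    print(f"reminder sent to {email}")

def recontact_wave(sample: list[dict]) -> None:
    """Send one reminder wave to nonrespondents still under the minimum."""
    for person in sample:
        if not person["responded"] and person["recontacts"] < MIN_EMAIL_RECONTACTS:
            send_reminder(person["email"])
            person["recontacts"] += 1

sample = [
    {"email": "a@example.com", "responded": True, "recontacts": 0},
    {"email": "b@example.com", "responded": False, "recontacts": 1},
]
recontact_wave(sample)  # only b@example.com receives a reminder
```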

Non-probability Samples

In surveys based on non-probability samples, there is no clear relationship between response rate and data quality. For example, if the sample source is a non-probability-based access panel, a higher response rate does not guarantee any greater accuracy in generalizing to a population than does a lower response rate. This is because the data are coming from a group of respondents with no known probability of selection from the target population.

Nonetheless, there are some practical reasons to consider attempting recontacts when using a non-probability sample. The Panel felt a guideline should be provided giving examples of situations in which attempted recontacts should be considered.

Guideline: Attempted Recontacts for a Non-probability Survey

  • In the case of surveys based on non-probability samples where recontact is feasible, consider making attempts to recontact respondents using an appropriate contact procedure. Reasons for making attempted recontacts could include, for example, achieving a desired sample size, reducing potential bias resulting from differences between "early" responders versus "late" responders, or selective recontacts to increase the sample size of certain sub-groups of interest (e.g., for analytical purposes, or to reduce the level of weighting required).
  • In general, the intensity of the recontact effort should balance both potential improvements to sample size/composition and degree of intrusiveness in terms of respecting the person's right to choose not to participate.

Validation of Respondents

The Advisory Panel considered standards and guidelines:

  • For ensuring respondents can answer a survey only once
  • To supplement a standard on validation of respondent identity already agreed to for access panels so that it also applies to online surveys that do not use access panels

Answer a Survey Only Once

For surveys generated from panel or email samples where respondents are known in advance, unique login passwords and cookies can be used to ensure that respondents complete the survey questionnaire only once.
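The following is a minimal sketch of the unique-login approach for a known sample: each invitee receives an unguessable single-use token, and a token is accepted only once. The storage and invitation mechanics are illustrative assumptions:

```python
import secrets

def issue_tokens(emails: list[str]) -> dict[str, str]:
    """Issue one unguessable login token per invited respondent."""
    return {secrets.token_urlsafe(16): email for email in emails}

tokens = issue_tokens(["a@example.com", "b@example.com"])
used_tokens: set[str] = set()

def accept_login(token: str) -> bool:
    """Allow a token to start the survey only once."""
    if token in tokens and token not in used_tokens:
        used_tokens.add(token)
        return True
    return False

first = next(iter(tokens))
print(accept_login(first))  # True: first use is accepted
print(accept_login(first))  # False: a second attempt with the same token is refused
```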

For website visitor intercept surveys, where login passwords are not feasible and/or where cookies are not allowed, other measures will need to be put in place. Examples of measures that can be taken include:

  • Random selection of visitors to sites to reduce the chance of respondents completing the survey more than once
  • Post-data collection procedures to identify potential duplication, e.g., pattern matching of responses (sketched after this list)
  • Including a question asking respondents if they have already completed the survey
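For the post-data-collection pattern matching mentioned in the list above, the following minimal sketch hashes each completed answer vector and groups exact duplicates for review; in practice this would be combined with fuzzier matching and other signals, and the field names here are hypothetical:

```python
import hashlib
from collections import defaultdict

def duplicate_pattern_groups(records: list[dict], question_ids: list[str]) -> list[list[int]]:
    """Group completed interviews whose answer vectors are identical."""
    groups: dict[str, list[int]] = defaultdict(list)
    for rec in records:
        key = hashlib.sha256("|".join(str(rec[q]) for q in question_ids).encode()).hexdigest()
        groups[key].append(rec["respondent_id"])
    return [ids for ids in groups.values() if len(ids) > 1]

records = [
    {"respondent_id": 1, "q1": 3, "q2": "yes"},
    {"respondent_id": 2, "q1": 3, "q2": "yes"},  # identical pattern: possible duplicate
    {"respondent_id": 3, "q1": 1, "q2": "no"},
]
print(duplicate_pattern_groups(records, ["q1", "q2"]))  # [[1, 2]]
```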

The Advisory Panel agreed to the following standard to ensure respondents answer a questionnaire only once.

Standard: Ensure respondents answer a survey only once

  • For any online survey research project, there should be documented procedures to limit the possibility that respondents can answer a survey for which they have been selected more than once.

Validation of Respondent Identity

Validation of respondent identity can be important in online survey research, because the Internet medium creates opportunities for respondents to misrepresent themselves (although, the possibility of misrepresentation is not unique to the Internet medium).

The risk that some respondents may misrepresent themselves is particularly an issue for commercial access panels, where respondents might be tempted to create multiple identities so that they can participate in more surveys in order to make more money.

In the case of surveys using access panels, the following standard applies:

Panel owners should have a clear and published rule about validation checks. They should maintain records of any checks they carry out to validate that the panel member did indeed answer a survey.

For reference, IMRO (IMRO Guidelines for Best Practices in Online Sample and Panel Management) notes the following types of validation checks that access panels might implement:

Physical address verification at panel intake and during incentive fulfillment is a common and relatively fail-safe method for ensuring that panelists have not created multiple panel profiles in order to harvest invitations and incentives. Other forms of validation, such as profiling phone calls at intake and/or on a regular basis, can validate the panelist's demographic and professional characteristics.

At the point of panel intake, the panel owner should, at a minimum, verify that the physical address provided by a respondent is a valid postal address. In addition, through survey questions and through periodic updates of the panel profile, panel companies should check for consistency in responses concerning demographics and consumer behavior/preferences. The best methods for validating the identity of a respondent are those where the respondent is dependent on a unique address for redemption of the incentive - such as a standardized home address. A respondent can have multiple email, PayPal or "home" addresses, but won't benefit if the check or prize can only be sent to a valid home address.
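As one small, low-cost piece of the intake validation described above, the sketch below checks that a Canadian postal code is well-formed. This is a format check only, an assumption-laden stand-in; full validation against a postal database or a third-party verification service goes well beyond it:

```python
import re

# Canadian postal codes follow letter-digit-letter, space, digit-letter-digit;
# the letters D, F, I, O, Q and U never appear, and W and Z never lead.
POSTAL_CODE = re.compile(r"^[ABCEGHJ-NPRSTVXY]\d[ABCEGHJ-NPRSTV-Z] \d[ABCEGHJ-NPRSTV-Z]\d$")

def plausible_postal_code(code: str) -> bool:
    """Return True when the string is a well-formed Canadian postal code."""
    return bool(POSTAL_CODE.match(code.strip().upper()))

print(plausible_postal_code("K1A 0B1"))  # True (well-formed, not necessarily deliverable)
print(plausible_postal_code("12345"))    # False
```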

With regard to online surveys that do not use access panels as a sample source, the position taken with respect to validation of respondent identity should consider both risk and feasibility:

  • Risk: Some sample sources may be considered less risky than others in terms of likelihood of respondent misrepresentation. For example, risk may be considered low in an online employee survey using a list of employee email addresses and it might be decided that no validation checks of respondent identity need to be done.

    On the other hand, for example, when certain other types of samples are used, the risk of respondent misrepresentation might be considered higher and one would want to implement some validation checks.

  • Feasibility: It may be that the feasibility of validation checks - both in terms of what can be done and how much can be done - will vary by sample source.

The following guideline was agreed to by the Advisory Panel.

Guideline: Validation of Respondent Identity

  • When conducting an online survey using sample sources other than access panels, consider both the level of risk of respondent misrepresentation and the feasibility of conducting respondent identity validation checks. If it is considered desirable and feasible to do validation checks, the validation procedure should be documented, and a record of the outcomes of the checks should be maintained.

Document "The Advisory Panel on Online Public Opinion Survey Quality - Final Report June 4, 2008" Navigation