9th Biennial ACSPRI Social Science Methodology Conference

Reducing the Bias from Probability and Nonprobability Online Panels by Excluding Satisficers Prior to Weighting
Friday 29th November 2024, 09:00–09:15 (Australia/Melbourne), Sutherland Room

As nonprobability samples become increasingly prevalent in survey research, it is important to understand how the choice of sampling frame affects data quality. Satisficing is one such aspect of data quality that is widely researched, but less so in comparative studies of probability and nonprobability sampling frames.
The Australian Comparative Study of Survey Methods (ACSSM), fielded in December 2022, systematically trialled eight sampling frames and survey modes: a web mode using the probability-based online panel Life in Australia™ (n = 585), a video interviewing arm recruited from Life in Australia™ (n = 600), a mobile phone RDD CATI stream (n = 803), RDD text-to-web (n = 600), and four nonprobability access panels (n = 853 to 892 per panel). The ACSSM follows an earlier Australian comparative study, fielded in 2015, which compared various probability and nonprobability modes (Lavrakas et al. 2022), and its replication on Life in Australia™ in 2017 (Kaczmirek et al. 2019).
Here we investigate satisficing (also known as careless or insufficient effort responding), assessing its prevalence through several metrics. The analysis compares the sampling frames, with particular attention to the differences between the comparable online probability and nonprobability frames. We use several data quality indicators associated with satisficing and careless responding (straightlining, speeding, logic checks, outlier analysis, item nonresponse, and midpoint selection), along with a composite metric. Results show differences between probability and nonprobability frames, with a consistently higher proportion of satisficing observed among the nonprobability online panels. Further, we examine the bias of survey estimates after excluding cases flagged for satisficing, and observe a significant bias reduction among the nonprobability online panels. Our research indicates that, in addition to the well-known representation errors (coverage and nonresponse) associated with nonprobability online panels (Baker et al. 2010), these panels also exhibit a higher level of measurement error.
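To make the flagging step concrete, the sketch below shows one common way indicators of this kind might be computed and combined into a composite flag before excluding cases and reweighting. It is illustrative only: the column names, scale, and thresholds (grid items g1–g8, a 1–5 response scale, a 40% speeding cutoff, a two-flag composite rule) are assumptions for the example, not the study's actual specification.

```python
import pandas as pd

# Hypothetical layout: 'g1'..'g8' are items in one grid battery on a
# 1-5 scale; 'duration_sec' is total questionnaire completion time.
GRID_ITEMS = [f"g{i}" for i in range(1, 9)]
MIDPOINT = 3

def flag_satisficers(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Straightlining: identical responses across an entire grid battery.
    out["flag_straightline"] = out[GRID_ITEMS].nunique(axis=1).eq(1)

    # Speeding: completing in under some fraction (here 40%) of the
    # median duration -- the cutoff is an illustrative assumption.
    cutoff = 0.4 * out["duration_sec"].median()
    out["flag_speeding"] = out["duration_sec"] < cutoff

    # Item nonresponse: more than 20% of grid items skipped.
    out["flag_nonresponse"] = out[GRID_ITEMS].isna().mean(axis=1) > 0.20

    # Midpoint selection: choosing the scale midpoint on most items.
    out["flag_midpoint"] = out[GRID_ITEMS].eq(MIDPOINT).mean(axis=1) > 0.80

    # Composite metric: flagged on two or more individual indicators.
    flags = ["flag_straightline", "flag_speeding",
             "flag_nonresponse", "flag_midpoint"]
    out["flag_composite"] = out[flags].sum(axis=1) >= 2
    return out

# Cases flagged by the composite would be excluded prior to weighting:
# clean = flag_satisficers(df).query("~flag_composite")
```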

Sam is a statistician at the Social Research Centre, with a focus on survey methodology and statistical modelling. He specialises in survey weighting and sample design, with extensive experience in Bayesian inference and complex survey weighting. His regular contributions to SRC projects include sample design and allocation, weighting and calibration of probability and nonprobability surveys, and bias and nonresponse analysis. Sam’s recent research in survey methodology includes investigations into calibration and blending, panel conditioning, satisficing, and Bayesian small area estimation. He has presented his research in Australia and overseas, including at the European Survey Research Association conference.