Sam Slamowicz
Sam is a statistician at the Social Research Centre, focusing on survey methodology and statistical modelling. He specialises in sample design and survey weighting, with extensive experience in Bayesian inference and the weighting of complex surveys. He regularly contributes to SRC projects through sample design and allocation, weighting and calibration of probability and non-probability surveys, and bias and non-response analysis. Sam’s recent research in survey methodology includes investigations into calibration and blending, panel conditioning, satisficing, and Bayesian small area estimation. He has presented research in Australia and overseas, including at the European Survey Research Association conference.
Sessions
Traditional methods for making estimates from surveys rely on random (probability) samples and high response rates. As the costs of such surveys increase and response rates decline, the principles underpinning traditional methods are becoming harder to justify for samples whose selection and response mechanisms are unknown (or unknowable). A range of statistical techniques has been developed for these situations, aiming to model the selection mechanisms and enable estimates and inferences with respect to the target population.
This workshop will give practical advice and examples on design, weighting and estimation for contemporary surveys that do not meet the assumptions of traditional probability methods. Topics covered will include the following (an illustrative sketch of the base-weighting and calibration steps appears after the list):
* Design considerations including sample composition, questionnaire items, data quality and benchmarks;
* Methods for estimating base weights, making use of a probability reference sample if one is available;
* Calibration to benchmarks; and
* Bias and variance assessment.
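As a rough illustration of the base-weighting and calibration items above, here is a minimal Python sketch. It assumes simulated data, a logistic propensity model with inverse-odds pseudo-weights, and raking to hypothetical benchmark margins; every variable name, benchmark value and sample size is invented for illustration and is not the method of any particular SRC project.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated covariates shared by both samples (all names are hypothetical).
def simulate(n):
    return pd.DataFrame({
        "age_group": rng.integers(0, 4, n),
        "sex": rng.integers(0, 2, n),
        "education": rng.integers(0, 3, n),
    })

nonprob = simulate(1000)                      # nonprobability sample
reference = simulate(800)                     # probability reference sample
ref_weights = np.full(len(reference), 25.0)   # its design weights

# --- Base weights via estimated inclusion propensities ---
# Stack the samples and model membership in the nonprobability sample,
# weighting reference cases by their design weights (a pseudo-weighting
# approach in the spirit of inverse-odds estimators).
stacked = pd.concat([nonprob.assign(z=1), reference.assign(z=0)],
                    ignore_index=True)
X = pd.get_dummies(stacked[["age_group", "sex", "education"]].astype(str))
sw = np.concatenate([np.ones(len(nonprob)), ref_weights])
model = LogisticRegression(max_iter=1000).fit(X, stacked["z"], sample_weight=sw)
p = model.predict_proba(X[stacked["z"] == 1])[:, 1]
base_weights = (1.0 - p) / p                  # inverse-odds pseudo-weights

# --- Calibration to benchmarks by raking (iterative proportional fitting) ---
benchmarks = {                                # hypothetical population margins
    "sex": {0: 0.49, 1: 0.51},
    "age_group": {0: 0.20, 1: 0.30, 2: 0.30, 3: 0.20},
}

def rake(df, w, margins, max_iter=50, tol=1e-6):
    w = np.asarray(w, dtype=float).copy()
    for _ in range(max_iter):
        max_dev = 0.0
        for var, targets in margins.items():
            # Current weighted proportions for this margin.
            current = pd.Series(w).groupby(df[var].values).sum() / w.sum()
            factors = {lvl: t / current[lvl] for lvl, t in targets.items()}
            w *= df[var].map(factors).to_numpy()
            max_dev = max(max_dev, *(abs(f - 1.0) for f in factors.values()))
        if max_dev < tol:
            break
    return w

final_weights = rake(nonprob, base_weights, benchmarks)
print(f"weight range: {final_weights.min():.2f} to {final_weights.max():.2f}")
```

In practice the propensity model specification, the choice of calibration variables and the trimming of extreme weights all require care; production analyses typically use established tools such as the calibration routines in R's survey package rather than hand-rolled raking.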
As nonprobability samples become increasingly prevalent throughout survey research, it is important to understand the impact of the choice of sampling frame on the many aspects of data quality. Satisficing is one widely researched aspect, but it has received less attention in comparative studies of probability and nonprobability sampling frames.
The Australian Comparative Study of Survey Methods (ACSSM), fielded in December 2022, systematically trialled eight sampling frames and survey modes: web mode using a probability-based online panel, Life in Australia™ (n = 585); a video interviewing arm recruited from Life in Australia™ (n = 600); a cell phone RDD CATI stream (n = 803); RDD text-to-web (n = 600); and four nonprobability access panels (n = 853 to 892 per panel). The ACSSM follows an earlier Australian comparative study fielded in 2015, which compared various probability and nonprobability modes (Lavrakas et al. 2022), and its replication on Life in Australia™ in 2017 (Kaczmirek et al. 2019).
Here we investigate satisficing (sometimes known as careless or insufficient effort responding), assessing its prevalence through various metrics. This analysis includes comparisons between the sampling frames, with particular attention to differences between the comparable online probability and nonprobability frames. We use several data quality indicators associated with satisficing and careless responding: straightlining, speeding, logic checks, outlier analysis, item nonresponse, and midpoint selection, along with a composite metric. Results show differences between probability and nonprobability frames, with a consistently higher proportion of satisficing observed among nonprobability online panels. Further, we explore the bias of survey estimates after excluding cases flagged for satisficing, and observe a significant bias reduction among the nonprobability online panels. Our research indicates that, in addition to the well-known errors of representation associated with nonprobability online panels with respect to coverage and nonresponse (Baker et al. 2010), there is also a higher level of measurement error.
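As a rough illustration of how indicators like these can be operationalised, the following Python sketch computes straightlining, speeding and midpoint-selection flags plus a simple composite on simulated data, then compares an estimate with and without the flagged cases. All column names, thresholds and the composite rule are invented for illustration; they are not the definitions used in the ACSSM analysis.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500

# Simulated grid of ten 5-point Likert items plus completion time (seconds).
grid = pd.DataFrame(rng.integers(1, 6, size=(n, 10)),
                    columns=[f"q{i:02d}" for i in range(1, 11)])
duration = pd.Series(rng.lognormal(mean=6.0, sigma=0.5, size=n))

flags = pd.DataFrame(index=grid.index)

# Straightlining: identical answers across every item in the grid.
flags["straightlining"] = grid.nunique(axis=1).eq(1)

# Speeding: completion time below an arbitrary fraction of the median.
flags["speeding"] = duration < 0.4 * duration.median()

# Midpoint selection: a high share of midpoint (3) responses.
flags["midpointing"] = grid.eq(3).mean(axis=1) > 0.8

# Logic checks, outlier analysis and item nonresponse would be flagged
# analogously from inconsistent answers, extreme values and missing items.

# Composite metric: number of indicators triggered; exclude at two or more.
composite = flags.sum(axis=1)
flagged = composite >= 2

# Illustrative bias check: an estimate with and without flagged cases.
y = grid["q01"]
print(f"Prevalence of each flag:\n{flags.mean()}")
print(f"Flagged overall: {flagged.mean():.1%}")
print(f"Mean of q01, all cases: {y.mean():.2f}; "
      f"excluding flagged: {y[~flagged].mean():.2f}")
```

In a real analysis the thresholds would be grounded in the questionnaire design (for example, speeding cut-offs derived from timed pilot interviews), and the bias check would compare weighted estimates against external benchmarks rather than an internal mean.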