8th Biennial ACSPRI Social Science Methodology Conference

Using slider scales for fuzzy-set qualitative comparative analysis (fsQCA): a fuzzy-set-theoretic approach to measuring degrees of membership
Wednesday 23rd November 2022, 13:20–13:35 (Australia/Melbourne), Zoom Breakout Room 2

The Likert scale enjoys common usage in fsQCA as a standardised tool for operationalising conceptual models (e.g. TAM, UTAUT) and collecting quantitative data to be calibrated into fuzzy scores. However, its limited discrete options can restrict the nuance of responses, which may lie in between or beyond the provided answers. Respondents may select options that do not accurately reflect the magnitude of their views, and there is an increased risk of the ceiling effect limiting data variation. Theoretical concerns also exist with applying direct algorithmic calibration (which requires supplying qualitative anchors for the values constituting full membership, full nonmembership, and the cross-over point) to ordinal data from Likert scales, as the procedure is intended for interval and ratio data. The slider scale is proposed as a more robust instrument that leverages set-theoretic and fuzzy logic principles for use in fsQCA.

The slider scale is a continuous rating scale for measuring ratio data (spanning 0 to 100 in increments of one) whereby respondents drag a digital marker along a horizontal quantitative scale to indicate their response. Qualitative descriptors are distributed across the slider scale in a predetermined order as anchors, covering different ranges of raw values and signposting different levels of membership scores. These anchors comprise a rubric description that gives respondents clear criteria for self-assessing their degree of membership in a variable of interest and reduces uncertainty about how the scale should be interpreted. This is useful for variables measurable as single-item scales or for efficiently aggregating multiple items measuring the same variable dimension into a single item. The broader response continuum enables respondents to express their answers with greater precision and granularity, facilitating nuanced differentiation between membership scores. Notably, scores and qualitative anchors are easily mappable to ordinal scales to accommodate larger-N studies.
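To illustrate, the following Python sketch shows how raw slider values might be mapped to qualitative anchors; the ranges and labels are hypothetical examples for illustration only, not the rubric used in the study.

    # Hypothetical anchor rubric: each tuple gives (lower bound, upper bound, descriptor).
    ANCHORS = [
        (0, 10, "fully out of the set"),
        (11, 40, "more out than in"),
        (41, 59, "neither in nor out (cross-over region)"),
        (60, 89, "more in than out"),
        (90, 100, "fully in the set"),
    ]

    def describe(raw_value):
        """Return the qualitative anchor covering a raw slider value (0-100)."""
        for low, high, label in ANCHORS:
            if low <= raw_value <= high:
                return label
        raise ValueError("raw value must lie between 0 and 100")

    print(describe(73))  # -> "more in than out"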

Studies have shown that the measurement quality of slider scales is comparable to the reliability and validity of Likert scales. Slider scales can be used without materially compromising data quality, are less susceptible to the ceiling effect, and are more likely to yield normally distributed values. However, there is a risk of systematic measurement error caused by the starting position of the digital marker, and of higher non-response rates associated with the greater effort required to answer questions. Both risks can be addressed through careful survey design and piloting feedback.

The ratio level of measurement of the slider scale makes it suitable for direct calibration and avoids the theoretical issues with using ordinal data from Likert scales. Raw values within the 0-to-100 scale range, representing the progression from full nonmembership to full membership, naturally capture the monotonic property of fuzzy sets ranging from 0 to 1 when transformed by a logistic function. The resulting set membership scores and supplied anchors (whether based on variable distribution or substantive criteria) will be more fine-grained when derived from numeric rather than categorical data. This reasoning extends to manual and indirect calibration.
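As a worked illustration of this transformation, the Python sketch below implements the commonly used log-odds (logistic) direct calibration procedure, assuming hypothetical anchors of 5 (full nonmembership), 50 (cross-over point) and 95 (full membership) on the 0-to-100 slider; these thresholds are for illustration only and are not those used in the pilot study.

    import math

    def direct_calibrate(x, full_non=5.0, crossover=50.0, full_mem=95.0):
        """Transform a raw slider value (0-100) into a fuzzy membership score (0-1).

        The anchors pin the log odds of membership at -3 (full nonmembership),
        0 (cross-over) and +3 (full membership); intermediate values are scaled
        linearly on the log-odds metric and passed through the logistic function.
        """
        deviation = x - crossover
        if deviation >= 0:
            log_odds = deviation * (3.0 / (full_mem - crossover))
        else:
            log_odds = deviation * (3.0 / (crossover - full_non))
        return 1.0 / (1.0 + math.exp(-log_odds))

    # Example: raw slider responses calibrated into fuzzy membership scores.
    for raw in (5, 30, 50, 70, 95):
        print(raw, round(direct_calibrate(raw), 3))

Calibrated this way, the full nonmembership anchor yields a score of about 0.05, the cross-over point 0.5, and the full membership anchor about 0.95, mirroring the conventional breakpoints of the direct method.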

A concrete application of the slider scale is presented as part of a pilot study investigating the causal complexity underpinning clinician acceptability of an artificial intelligence-based diagnostic support tool in real-world medical practice. The findings demonstrate how the slider scale can operationalise conceptual models and facilitate set-theoretic, configurational analysis, particularly for exploratory research where dimensionality is high and sample size is low.

Authors:
- Mr David Hua - dhua9758@uni.sydney.edu.au
- Dr Neysa Petrina - neysa.petrina@sydney.edu.au
- Dr Simon Poon - simon.poon@sydney.edu.au

Recording link: https://acspri-org-au.zoom.us/rec/share/bMvRMHyt1mxBmr3v9JEMtzw40XMlR7ie_q-pDBV0BwdJ1Nf47TlmI5mkRiBW5d4n.4ITWOzq9QFTcwMgN?startTime=1669170286000



I am an interdisciplinary researcher whose expertise lies at the intersection of computer science and law. I am passionate about the interplay of technology and human society and the impact this dynamic has on our lives. My work focuses on understanding how artificial intelligence systems can be developed, evaluated, and regulated to be trustworthy and responsible so that they may augment human decision-making and optimise outcomes for the social good and public interest. This demands synthesising the insights and methods of different disciplines to arrive at a holistic, nuanced solution to wicked problems that cannot be reached using the perspective of one discipline. The configurational, cross-case approach of qualitative comparative analysis is of particular interest and value to my work.