Sair Buckle
Sair Buckle is a dynamic PhD student at Charles Sturt University's AI & Cyber Futures Institute, where she holds a funded placement in the Behavioural Science team. Her research focuses on using technology to enhance diversity, equity, and inclusion within Commonwealth government and workplaces. Sair brings a wealth of experience from her diverse background in advertising, marketing, product development, and entrepreneurship. Her unique blend of professional expertise and commitment to inclusivity positions her as a valuable contributor to the field of behavioural science and technology-driven organisational change.
Session
Commonwealth governments are expected to set the benchmark for workplace standards. However, there is a significant need for more robust mechanisms to ensure psychological safety and prevent bullying, harassment, and discrimination within their own ranks. This discrepancy between the expectations placed on general workplaces and the practices within the highest levels of government warrants further investigation. This paper explores historic and current approaches to identifying and measuring bullying within organisations, with a focus on Western Commonwealth governments. It analyses bullying language within the Australian Commonwealth Government by labelling three years of parliamentary Question Time transcripts (2018-2021), using SafeWork Australia's definition of bullying. Additionally, the study assesses the feasibility of using and training Large Language Models (LLMs), namely RoBERTa, MACAS and ChatGPT-4, to detect bullying language in Hansard data across Commonwealth governments. The LLMs performed well when fine-tuned, with potential for further optimisation to enhance their classification capabilities.
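The abstract does not detail the fine-tuning setup, but as a rough illustration, a binary "bullying language" classifier of this kind could be fine-tuned with the Hugging Face Transformers library. The sketch below is an assumption-laden example, not the study's actual pipeline: the file name hansard_labelled.csv, the column names, and the hyperparameters are all placeholders.

```python
# Illustrative sketch only: fine-tuning RoBERTa as a binary classifier of
# bullying language in labelled transcript excerpts. File names, columns,
# and hyperparameters are hypothetical, not taken from the study.
import numpy as np
from datasets import load_dataset
from transformers import (
    RobertaTokenizerFast,
    RobertaForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# Hypothetical labelled Hansard excerpts with columns "text" and "label" (0/1).
dataset = load_dataset("csv", data_files="hansard_labelled.csv")["train"]
dataset = dataset.train_test_split(test_size=0.2, seed=42)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

def tokenize(batch):
    # Truncate/pad utterances to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def accuracy(eval_pred):
    # Simple accuracy over the held-out split.
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

args = TrainingArguments(
    output_dir="roberta-bullying",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())
```

In practice, class imbalance (bullying language is rare relative to ordinary debate) and annotation guidelines anchored to SafeWork Australia's definition would shape the labelled data far more than the model choice itself.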
Through these objectives, the research contributes to the field of behavioural science by identifying issues in current measurement approaches, providing empirical evidence on the prevalence of bullying language in the Australian Commonwealth Government, and evaluating the potential of LLMs to enhance real-time detection and prevention of bullying. This study aims to inform future policy changes and improve enforcement mechanisms, ultimately enhancing transparency and accountability within government institutions.