Leveraging Large Language Models for Efficient Persuasion Detection in Online Discourse
Belief revision is the process of updating one’s beliefs when presented with new evidence, while persuasion aims to change those beliefs. Traditional models of belief revision focus on face-to-face interactions, but with the rise of social media, new models are needed to capture belief revision in text-based online discourse. Here we utilise large language models (LLMs) to build a model that predicts successful belief revision from features derived from psychological studies.
Our approach leverages LLMs for dimension reduction: LLM-generated ratings of messages serve as features for a random forest classifier that predicts whether a message will result in belief change. Results show that serendipity and willingness to share are the top-ranking features in the model. Our findings provide insights into the characteristics of persuasive messages and demonstrate how LLMs can enhance models based on psychological theory. This work has broader applications in areas such as online influence detection and misinformation mitigation, and suggests ways to measure the effectiveness of online narratives.
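The pipeline described above can be sketched in code. The following is a minimal, illustrative sketch, not the authors' implementation: it assumes an LLM has already rated each message on a few psychologically grounded dimensions (the feature names and the synthetic data here are hypothetical), and fits a random forest whose feature importances can then be ranked, as in the reported serendipity/willingness-to-share finding.

```python
# Illustrative sketch of the abstract's pipeline (all names and data
# are hypothetical, not from the paper): LLM-generated ratings of
# messages become features for a random forest that predicts belief change.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical LLM ratings (1-7 scale) for 500 messages on four dimensions.
features = ["serendipity", "willingness_to_share", "novelty", "emotiveness"]
X = rng.integers(1, 8, size=(500, len(features))).astype(float)

# Synthetic labels: belief change loosely driven by the first two features.
logits = 0.8 * X[:, 0] + 0.6 * X[:, 1] - 5.5 + rng.normal(0.0, 1.0, 500)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Rank dimensions by impurity-based feature importance.
ranked = sorted(zip(features, clf.feature_importances_),
                key=lambda p: p[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

In this framing the LLM performs dimension reduction by compressing free-form message text into a small, interpretable feature vector, which keeps the downstream classifier simple enough for its feature importances to be read as psychological insight.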