Chatbots Influence Political Opinions but Lack Accuracy
Full Transcript
Chatbots can sway people's political opinions but are often substantially inaccurate while doing so, according to a study conducted by the UK government's AI Security Institute (AISI). The study, the largest and most systematic investigation of AI persuasiveness to date, involved nearly 80,000 British participants who held conversations with 19 different AI models.
The research, published in the journal Science, tested models such as OpenAI's ChatGPT and Elon Musk's Grok, along with open-source models like Meta's Llama 3 and Alibaba's Qwen, on topics including public sector pay, strikes, and the cost-of-living crisis.
Participants reported their agreement with various political statements before and after their interactions with the AI. The study revealed that information-dense AI responses were the most persuasive, particularly when the models were prompted to use facts and evidence.
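To make that measurement concrete, here is a minimal, purely illustrative sketch of how a pre/post persuasion effect of this kind might be computed. The ratings, scale, and variable names are hypothetical assumptions for illustration, not data or methods taken from the study.

```python
# Illustrative sketch only: the ratings below are hypothetical, not AISI data.
from statistics import mean

# Each participant rates agreement with a political statement (assumed here
# to be a 0-100 scale) before and after a conversation with a chatbot.
pre_ratings = [42, 55, 30, 68, 50]
post_ratings = [51, 60, 33, 70, 58]

# The persuasion effect is the mean within-participant shift in agreement.
shifts = [post - pre for pre, post in zip(pre_ratings, post_ratings)]
print(f"Mean shift in agreement: {mean(shifts):+.1f} points")
```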
However, the responses that were densest in factual claims were often also the least accurate, suggesting that boosting persuasiveness may come at the cost of truthfulness, which could negatively affect public discourse.
On average, conversations lasted about 10 minutes, with participants exchanging around seven messages. The study also indicated that post-training, the additional fine-tuning applied after a model's initial training, significantly improved the models' persuasiveness.
Kobi Hackenburg, a research scientist at AISI and co-author of the report, noted that simply increasing the amount of information provided by the models was more effective than sophisticated psychological persuasion techniques.
The researchers acknowledged practical barriers to AI manipulation of opinions, such as the time required for users to engage in lengthy conversations. They also pointed to psychological limits on human persuadability, questioning whether chatbots could have the same impact in real-world settings where attention is divided.
The findings raise important concerns about AI's capacity to shape public opinion and about the potential consequences of the misinformation it can spread in the process.