“One conversation with an LLM has a fairly meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the study. LLMs can persuade people more effectively than political advertisements because they generate far more information in real time and strategically deploy it in conversations, he says.
For the paper, the researchers recruited more than 2,300 participants to engage in a conversation with a chatbot two months before the 2024 US presidential election. The chatbot, which was trained to advocate for one of the top two candidates, was surprisingly persuasive, especially when discussing candidates’ policy platforms on issues such as the economy and health care. Donald Trump supporters who chatted with an AI model favoring Kamala Harris became slightly more inclined to support Harris, moving 3.9 points toward her on a 100-point scale. That was roughly four times the measured effect of political advertisements during the 2016 and 2020 elections. The AI model favoring Trump moved Harris supporters 2.3 points toward Trump.
In similar experiments conducted during the lead-ups to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect. The chatbots shifted opposition voters’ attitudes by about 10 points.
Long-standing theories of politically motivated reasoning hold that partisan voters are impervious to facts and evidence that contradict their beliefs. But the researchers found that the chatbots, which used a variety of models including variants of GPT and DeepSeek, were more persuasive when they were instructed to use facts and evidence than when they were told not to do so. “People are updating on the basis of the facts and information that the model is providing to them,” says Thomas Costello, a psychologist at American University, who worked on the project.
The catch is that some of the “evidence” and “facts” the chatbots presented were untrue. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates. The underlying models are trained on vast amounts of human-written text, which means they reproduce real-world phenomena—including “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts, says Costello.
In the other study published this week, an overlapping team of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to interact with nearly 77,000 participants from the UK on more than 700 political issues while varying factors such as computational power, training techniques, and rhetorical strategies.
The most effective way to make the models persuasive was to instruct them to pack their arguments with facts and evidence and then give them additional training by feeding them examples of persuasive conversations. In fact, the most persuasive model shifted participants who initially disagreed with a political statement 26.1 points toward agreeing. “These are really large treatment effects,” says Kobi Hackenburg, a research scientist at the UK AI Security Institute, who worked on the project.
