AI Chatbots Can Sway Political Views — But Their Most Persuasive Claims Often Contain Inaccuracies

A new study in Science shows that conversational AI chatbots can shift political opinions, and that part of the effect persists a month later. Researchers recruited nearly 77,000 U.K. adults to interact with 17 different large language models and found the chatbots were most persuasive when they supplied extensive, detailed information. However, about 19% of chatbot claims were rated predominantly inaccurate, and the most persuasive strategies often produced the least accurate statements, raising concerns about misuse and the truthfulness of AI persuasion.

AI Chatbots Can Change Minds — And Not Always Truthfully

A large new study, published in the journal Science, finds that conversational AI chatbots can meaningfully shift people's political opinions, and that the chatbots are most persuasive when they supply large amounts of detailed information. The research also reveals a troubling trade-off: the most persuasive models and prompting strategies tended to produce the least accurate claims.

How the Study Worked

Researchers recruited nearly 77,000 adults in the United Kingdom via a crowdsourcing platform and paid them to interact with a variety of AI chatbots. The team tested 17 different large language models from providers including OpenAI, Meta and xAI, and asked participants for their views on British political topics such as taxes and immigration. Whatever position a participant initially held, the chatbots were instructed to try to move them toward the opposing one.

Main Findings

The study found that chatbots frequently succeeded in persuading participants and that interactive, back-and-forth conversation was substantially more effective than a short, static AI-written message. Depending on the model, conversation was 41% to 52% more persuasive than a single 200-word AI-generated message. The effect was also durable: between 36% and 42% of the persuasion persisted when participants were re-surveyed one month later.

Notably, the chatbots were most persuasive when they provided large amounts of in-depth information rather than relying primarily on moral appeals or highly personalized messaging. The authors suggest that an AI’s ability to generate extensive, on-demand information could allow it to out-persuade even elite human persuaders in some contexts — though the study did not directly compare chatbots with human debaters.

Accuracy Concerns

Alongside the persuasive power, the researchers documented significant accuracy problems. Approximately 19% of chatbot claims in the study were rated as “predominantly inaccurate.” The authors also observed a decline in accuracy among the most recent, largest frontier models: claims from OpenAI’s GPT-4.5 (released in February 2025) were, on average, less accurate than claims from some smaller or older OpenAI models tested. Data collection was completed before OpenAI released GPT-5.1.

“The most persuasive models and prompting strategies tended to produce the least accurate information,” the paper states, warning that optimizing systems for persuasiveness may come at a cost to truthfulness.

Risks and Context

The authors caution that a highly persuasive but inaccurate chatbot could be abused by unscrupulous actors — for example, to promote radical ideologies or to inflame political unrest. The team behind the paper includes researchers from the U.K.-based AI Security Institute, the University of Oxford, the London School of Economics, Stanford University and MIT, with funding from the British government’s Department for Science, Innovation and Technology.

Independent experts called the study an important contribution to understanding how large language models (LLMs) interact with democratic processes. Some noted that if competing actors on both sides of an issue used equally persuasive tools, effects might partially cancel out — though the information environment could still be degraded if accuracy suffers.

Limitations

The authors emphasize that the experiment’s controlled, incentivized setting does not fully mirror real-world political communication. It remains unclear how often people would voluntarily engage in sustained, cognitively demanding political exchanges with AI outside a survey environment.

Takeaway

The study shows both the power and the danger of conversational AI in political contexts: chatbots can change opinions and produce durable effects, but their most compelling answers may contain misleading or incorrect information. The findings strengthen calls for transparency, stronger accuracy safeguards and further research on how to align persuasiveness with truthfulness.
