Researchers from the University of Miami and the Network Contagion Research Institute report that ChatGPT can quickly adopt and amplify authoritarian viewpoints after brief prompts. Experiments on GPT-5 family models showed that opinion inputs ranging from a few sentences to full articles shifted the model's agreement with left- and right-wing authoritarian statements and increased how hostile it judged neutral faces to be. The report, which has not yet been peer-reviewed, compared model outputs with responses from more than 1,200 human participants and calls for more research into model design and human–AI interaction frameworks.
Study: ChatGPT Can Be Primed Toward Authoritarian Views After a Single Prompt

Researchers from the University of Miami and the Network Contagion Research Institute (NCRI) report that OpenAI’s ChatGPT can quickly adopt and amplify authoritarian viewpoints after brief, seemingly innocuous user prompts. The unpublished report — which examined multiple versions of the GPT-5 family — suggests short interactions can shift the model’s subsequent answers and even affect how it interprets neutral images of faces.
What the Researchers Did
The team conducted three experiments in December using two ChatGPT variants based on GPT-5 and GPT-5.2. In one core experiment they opened fresh chat sessions and supplied text the researchers labeled as reflecting left- or right-wing authoritarian tendencies. Inputs ranged from very short snippets (as brief as four sentences) to full opinion articles.
After priming the model, the researchers measured its responses using a battery of statements modeled on standardized psychological questionnaires, recording the model's agreement with items aligned with authoritarian thinking. They also surveyed more than 1,200 human participants in April and compared their responses with the model's outputs.
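For readers who want a concrete picture of the procedure, the sketch below approximates that priming-and-scoring loop in Python using the OpenAI API. It is illustrative only: the model identifier, priming passage and survey items are stand-ins chosen for this article, not the researchers' actual materials or code.

```python
# Minimal sketch of a priming-and-scoring loop like the one described in the report.
# Assumptions (not from the study): the "gpt-5" model identifier, the priming
# placeholder, and the survey items below are illustrative stand-ins only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRIMING_TEXT = "<short opinion passage used as priming input>"

SURVEY_ITEMS = [
    "Maintaining order is more important than protecting individual freedoms.",
    "Some viewpoints are too dangerous to be given a public platform.",
]

def score_item(priming_text: str, item: str) -> str:
    """Start a fresh conversation, supply the priming text, then ask the model
    to rate its agreement with a single statement on a 1-7 scale."""
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder; the report tested GPT-5-family variants
        messages=[
            {"role": "user", "content": priming_text},
            {
                "role": "user",
                "content": (
                    "On a scale of 1 (strongly disagree) to 7 (strongly agree), "
                    "how much do you agree with this statement? "
                    f"Reply with only the number.\n\n{item}"
                ),
            },
        ],
    )
    return response.choices[0].message.content.strip()

for item in SURVEY_ITEMS:
    print(item, "->", score_item(PRIMING_TEXT, item))
```

Repeating the same loop without the priming message, or with a passage from the opposite end of the spectrum, gives the baseline and contrast conditions against which shifts in agreement can be compared.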
Key Findings
Across trials the report found consistent shifts toward authoritarian responses after priming. For example, feeding the model an essay the team categorized as left-authoritarian (calling for abolishing policing and capitalist structures) produced stronger agreement with statements about whether the wealthy should be stripped of their property or whether reducing inequality should override free-speech concerns. Conversely, a right-authoritarian article emphasizing order and strong leadership more than doubled the model's agreement with intolerant or censorship-friendly statements.
In a separate test, the team asked the model to rate the hostility of neutral facial images after ideological priming. Its hostility ratings rose by 7.9% following the left-wing prompt and by 9.3% after the right-wing prompt, the report says.
Authors' Interpretation and Responses
“Something about how these systems are built makes them structurally vulnerable to authoritarian amplification,” Joel Finkelstein, NCRI co-founder and a lead author, told NBC News. He argued the effect may stem from model architecture and training processes that emphasize hierarchy, authority and threat detection.
OpenAI told NBC News that ChatGPT is designed to be objective and to present multiple perspectives, and that responses will shift when users push the system toward a viewpoint. A spokesperson said the company actively measures and works to reduce political bias within existing safety guardrails.
Limitations and Expert Reaction
The report has not been peer-reviewed. External experts cautioned that the study raises important concerns but also has methodological limits. Ziang Xiao, a Johns Hopkins computer science professor not involved in the work, highlighted possible confounds such as web-sourced training data and noted that the study tested only a small set of models and inputs, focusing on OpenAI's ChatGPT rather than other systems such as Anthropic's Claude or Google's Gemini.
The authors themselves call for more research into how model architecture, training data and interaction dynamics shape conversational AI behavior, and they warn that such effects could have consequences in settings where AI evaluates people — for example, hiring or security screening.
Why It Matters
If confirmed, the findings suggest conversational AIs can not only echo user prompts but amplify them into more extreme positions, raising concerns about polarization, manipulation and the use of AI in sensitive decision-making contexts. The researchers urge further study and development of frameworks to govern human–AI interactions.
Note: This article summarizes an NBC News report on the unpublished NCRI/University of Miami study and includes responses from the report’s authors and external experts. The study’s results should be interpreted cautiously until peer review and replication are completed.