CRBC News

Stanford Extension Reduces Polarization on X by Reordering — Not Removing — Posts

Stanford researchers created a browser extension that reduces partisan hostility on X by reordering posts instead of removing them. In a one-week field trial with 1,256 users ahead of the 2024 U.S. election, downgrading anti-democratic and highly polarizing posts led to more positive attitudes toward political opponents. The effect was observed among both liberals and conservatives. Experts say the approach shows promise but should be replicated and ethically evaluated in other contexts.

Researchers at Stanford developed a browser extension that measurably reduced partisan hostility on X by reordering posts in users' feeds rather than hiding or deleting content.

Study and method

The team built the extension around an AI language model that evaluated posts in real time and dynamically reordered each participant's feed. The system identified content expressing anti-democratic sentiment or partisan hostility, including calls for violence against or for jailing political opponents, and pushed those posts lower in the timeline without removing them.
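To make the mechanism concrete, here is a minimal Python sketch of rank demotion. It is not the Stanford team's actual implementation: it assumes a hypothetical per-post hostility score standing in for the study's language-model classification, and the names Post, rerank_feed, and HOSTILITY_THRESHOLD are illustrative inventions of this article.

```python
# Minimal sketch of rank demotion: flagged posts are moved down,
# never removed. The hostility score here is a hypothetical stand-in
# for the study's AI language model classification step.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    hostility: float  # hypothetical classifier output in [0, 1]

HOSTILITY_THRESHOLD = 0.7  # illustrative cutoff, not from the study

def rerank_feed(feed: list[Post]) -> list[Post]:
    """Demote flagged posts to the bottom of the timeline.

    Nothing is deleted: benign posts keep their original order,
    and flagged posts follow, also in their original relative order.
    """
    benign = [p for p in feed if p.hostility < HOSTILITY_THRESHOLD]
    flagged = [p for p in feed if p.hostility >= HOSTILITY_THRESHOLD]
    return benign + flagged

if __name__ == "__main__":
    feed = [
        Post("1", "Local election results are in.", 0.1),
        Post("2", "Jail everyone who voted for them!", 0.9),
        Post("3", "Here is my analysis of turnout.", 0.2),
    ]
    for post in rerank_feed(feed):
        print(post.post_id, post.text)
```

The key design point, reflected in the sketch, is that flagged posts stay in the feed; only their position changes.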

The field trial ran for one week ahead of the 2024 U.S. presidential election with 1,256 X users, each randomly assigned to one of two conditions: one in which polarizing posts were ranked higher in the feed and another in which they were ranked lower.

Key findings

Participants whose feeds demoted anti-democratic and highly partisan posts reported more positive attitudes toward the opposing political party, an effect observed among both self-identified liberals and conservatives.

"Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them," said Michael Bernstein, professor of computer science at Stanford and lead author of the study. "We have demonstrated an approach that lets researchers and end users have that power."

Expert perspective

Josephine Schmitt, scientific coordinator at the Center for Advanced Internet Studies, said the study shows that even small algorithmic changes can shift how political groups feel about one another and influence affective polarization. Philipp Lorenz-Spreen of the Max Planck Institute for Human Development cautioned that the results may not generalize to countries without a clear two-party divide or where X has limited reach, and recommended replication in different national contexts.

Limitations and implications

The experiment was conducted without cooperation from X and focused on a single platform and a short, one-week intervention. The researchers did not delete or block content — only adjusted rankings — which reduces censorship concerns but raises other questions about who should control algorithmic curation. The findings suggest a promising, low-friction approach to reduce affective polarization, but broader testing and ethical review are needed before deploying similar tools at scale.

The study points toward user-controlled or independently audited ranking systems that could foster healthier civic discourse, while highlighting the need to guard against manipulation and to test effects across different countries and media ecosystems.
