
People Are Getting Their News From AI — And It’s Quietly Shaping Public Opinion

When a bot brings you the news, who built it and how it presents the information matter. (Zentangle/iStock via Getty Images)

Large language models are becoming a primary gateway to news, creating summaries and headlines that reach readers before traditional moderation can act. Research by Adrian Kuenzler and Stefan Schmid (accepted to Communications of the ACM) finds that these models exhibit communication bias: they may emphasize some perspectives while downplaying others, even when facts are accurate. Persona-based steerability and sycophancy cause models to tailor emphasis to perceived user identities. The authors argue that beyond regulation, mitigating these effects requires competition, transparency and meaningful user participation.

Meta's decision to end its professional fact-checking program triggered sharp criticism across technology and media circles, with observers warning that removing expert oversight risks undermining trust in online information. But that debate has largely missed a subtler, rapidly growing force: large language models (LLMs) are increasingly generating the summaries, headlines and initial story drafts readers encounter, long before traditional moderation systems can intervene.

How LLMs Influence What We Read

LLMs are not merely neutral conveyors of facts. Embedded in chatbots, virtual assistants, news sites and search features, they act as primary gateways to information. Research, including a forthcoming paper by Adrian Kuenzler and Stefan Schmid accepted to Communications of the ACM, shows that LLMs display what the authors call communication bias. That means models can foreground some perspectives while downplaying or omitting others, shaping how people form opinions even when the underlying facts are accurate.

Persona, Sycophancy and Subtle Framing

An emergent property called persona-based steerability helps explain how this happens. Models infer attributes of users from prompts and tailor tone and emphasis accordingly. For example, when asked about the same climate law, an LLM might stress environmental shortcomings for a self-described activist but highlight regulatory costs for a business owner; both answers are factually correct, yet framed differently.
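To make the mechanism concrete, here is a minimal sketch of how one might probe for persona-based framing differences. It assumes access to an OpenAI-compatible chat API via the openai Python package; the model name, the question and the persona wordings are illustrative assumptions, not details from the study.

```python
# Minimal probe for persona-based framing differences.
# Assumes an OpenAI-compatible chat API; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Summarize the main effects of the new climate law in three sentences."

# Two self-described personas; only this one-line prefix differs between calls.
PERSONAS = {
    "activist": "I'm a climate activist.",
    "business_owner": "I run a small manufacturing business.",
}

def ask(persona_statement: str) -> str:
    """Send the same question, preceded by a one-line persona statement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": f"{persona_statement} {QUESTION}"}],
        temperature=0,  # reduce sampling noise so differences reflect the persona
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for label, statement in PERSONAS.items():
        print(f"--- {label} ---")
        print(ask(statement))
        print()
```

Comparing the two outputs side by side shows whether emphasis shifts with the stated persona. A single pair of responses only illustrates the mechanism; detecting the cumulative framing effects the authors describe would require repeating such probes across many questions, personas and interactions.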

This behavior can resemble flattery and is often called sycophancy. But communication bias runs deeper: it reflects who builds these systems, the datasets and sources chosen for training, and the incentives guiding development. When a few developers dominate the market, small biases multiply at scale and can meaningfully distort public discourse.

Regulation Helps — But Isn’t Enough

Policymakers are responding: the EU's AI Act and Digital Services Act aim to increase transparency and accountability. Yet neither is specifically tailored to the nuanced, interaction-driven pattern of communication bias. Common regulatory approaches — prelaunch audits or post-deployment bans on harmful outputs — can catch egregious errors but are less effective at detecting the subtle, cumulative framing effects that emerge through repeated user interactions.

What Works Better

Beyond regulation, mitigating communication bias requires structural changes: protecting competition so multiple LLM approaches can coexist, boosting transparency about data and model behavior, and enabling meaningful user participation in how systems are designed and tested. These measures help counter incentives that favor a narrow set of models and design choices.

Key finding: Communication bias can shift public perception even when model outputs are factually accurate. Addressing it demands more than correcting training data — it requires changing the market dynamics and governance around the tools that shape public information.

Ultimately, AI will not just influence the facts we find but also the framing of debates and the collective vision of our future. Thoughtful policy, market safeguards and engaged users will determine whether that influence strengthens democratic information ecosystems or deepens subtle distortions.

Author: Adrian Kuenzler, University of Denver; University of Hong Kong.

Republished From: The Conversation — an independent nonprofit providing facts and analysis.

Disclosure: Adrian Kuenzler reports no financial conflicts or affiliations that would benefit from this article beyond academic appointments.
