
Experts Warn Generative AI Is Accelerating the Collapse of Shared Reality

Summary: Researchers warn that generative AI is making image- and video-based verification increasingly unreliable, accelerating the erosion of a shared, verifiable reality. Platform policy rollbacks and tools that strip or add AI watermarks have worsened the problem, while large AI data centers are straining power grids. Experts cite "cognitive exhaustion" and growing public disengagement and call for updated media literacy and coordinated policy responses.

Misinformation and disinformation have long been problems online, but researchers tell NBC News that the rapid spread of generative AI tools has sharply raised the stakes. Experts now warn that increasingly convincing AI-produced images and video threaten the basic shared facts that societies rely on.

After the 2016 U.S. general election, a surge of false content on social platforms prompted Senate hearings, academic studies and a large-scale effort to strengthen platform safety. Today, however, researchers say the landscape has changed: easy-to-use generative tools make lifelike deepfakes widely accessible, and some third-party utilities can remove or add watermarks that identify material as AI-generated—further blurring the line between real and fabricated content.

How realistic deepfakes are changing verification

Tools such as Sora and other consumer-facing generators can produce polished video and audio with minimal technical skill. As Jeff Hancock, founding director of the Stanford Social Media Lab, warns:

"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there."

Users traditionally looked for obvious "tells"—small visual errors such as an incorrect number of fingers—but rapid improvements in AI models are making those cues unreliable.

Platform policies and real-world consequences

After 2016, major platforms introduced stronger trust-and-safety measures, though several were later scaled back: Facebook reduced certain initiatives, and after Elon Musk acquired Twitter and rebranded it as "X," many counter-disinformation programs were paused. Meanwhile, AI-generated clips have already misled audiences and newsrooms, sometimes spreading widely without context during extreme weather events such as Hurricane Melissa.

Energy and infrastructure impacts

Concerns extend beyond truth decay. The rapid expansion of compute-intensive AI has increased demand on power grids: large AI data centers strain capacity and contribute to higher electricity costs, and the U.S. Department of Energy has warned about grid limits tied to this scaling.

The civic and psychological toll

Studies after 2016 showed people often choose news that confirms their beliefs regardless of accuracy. University of Rhode Island Professor Renee Hobbs describes a related psychological effect—"cognitive exhaustion" or a "firehose" of propaganda—where relentless exposure to falsehoods produces doubt, anxiety and ultimately disengagement.

"If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response," Hobbs said. "When people stop caring about whether something's true or not, the danger is not just deception; it's the collapse of even being motivated to seek truth."

What experts recommend

Researchers and educators are racing to incorporate generative AI into media literacy programs, teaching practical verification skills and critical consumption habits. But many warn that individual efforts will be hard-pressed to keep pace with the volume of synthetic content. As a complement to education, advocates urge collective action, calling on citizens to press lawmakers for clearer regulations, resources for platform oversight, and support for verification tools at scale.

Takeaway: Generative AI has made visual verification harder, heightened the need for updated media literacy, and created new infrastructure pressures. Without coordinated policy and public education, experts warn, civic trust and the habit of seeking truth are at risk.
