AI Is Eroding Trust Online: Experts Warn Deepfakes Are Blurring Reality
Advances in generative AI are accelerating the spread of highly convincing synthetic images and videos, undermining the instinctive trust people place in visual media. Experts warn manipulated content spreads fastest during fast-breaking events and is amplified by platform incentives that reward engagement. Cognitive fatigue, partisan bias and the limits of current detection tools make verification harder; researchers urge stronger media literacy, better detection and a cultural shift toward skeptical verification.

For decades many people relied on a simple instinct: if you could see it, it must be true. That default trust is fraying as artificial intelligence makes convincing fakes commonplace and blurs the line between authentic footage and synthetic material.
Fast Events, Fast Fakes
In the first week of 2026, media analysts say, AI tools accelerated the circulation of synthetic images and doctored videos across social platforms. Coverage tied to President Donald Trump's operation related to Venezuela produced a wave of AI-generated photos, repurposed archival footage and manipulated stills. After an Immigration and Customs Enforcement officer fatally shot a woman in her car, social feeds were flooded with what appeared to be an AI-altered frame based on real video; other users posted AI attempts to digitally remove the officer's mask.
Why The Problem Is Growing
Experts point to platform economics, in which apps reward creators for engagement, as a driver that encourages recycling and sensationalizing existing media to boost reach and emotional intensity during viral moments. That blend of misinformation and genuine evidence accelerates the erosion of trust online, especially during fast-breaking stories when verified information is scarce.
"As we start to worry about AI, it will likely, at least in the short term, undermine our trust default — that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge."
Credibility crises are not new; the printing press fueled waves of propaganda centuries before the misinformation surge around the 2016 election. But generative AI adds speed, scale and realism. Researchers warn that familiar heuristics for spotting fakes, such as looking for distorted fingers or simple visual glitches, are quickly losing effectiveness as models improve.
Cognitive Fatigue, Partisanship And Real-World Harm
Renee Hobbs, a professor of communication studies at the University of Rhode Island, says cognitive exhaustion makes it harder for people to separate real from synthetic material; when doubt is constant, disengagement can become a coping mechanism. Hany Farid, a computer science professor at UC Berkeley, finds that people are roughly as likely to dismiss real content as fake as they are to accept fabricated content as real, and that partisan framing substantially worsens accuracy.
Siwei Lyu, who helps maintain the open-source DeepFake-o-meter, adds that likenesses of familiar figures such as celebrities, politicians, friends and family are especially persuasive and therefore more likely to mislead as they become more realistic. Deepfakes have already surfaced in courtrooms and fooled officials; last year, numerous AI-generated clips falsely portrayed Ukrainian soldiers surrendering en masse, illustrating how quickly false narratives can spread.
What Experts Recommend
Researchers and platform leaders point to a combination of actions: stronger media-literacy education, transparent platform policies, improvements in detection tools, and a cultural shift toward healthy skepticism about who shares media and why. The Organization for Economic Co-operation and Development plans a global Media & Artificial Intelligence Literacy assessment for 15-year-olds in 2029 to help schools teach critical skills.
Bottom line: Society will likely move from assuming visual media is real by default to approaching it with skepticism and verification habits, but that transition will be uncomfortable and will require coordinated action by educators, platforms, researchers and users.