AI Video Fakes Are Fooling Newsrooms — and Even Top AI Experts

Short version: Several mainstream outlets recently published or aired clips later flagged as likely AI-generated, and a high-profile AI researcher shared a similar-looking clip on social media. The episode highlights how consumer-facing video tools make realistic fakes easier to produce and harder to spot.
On Friday, Fox News published a story headlined "SNAP beneficiaries threaten to ransack stores over government shutdown," citing social-media clips that purported to show people reacting angrily to potential benefit delays. After online commentators pointed out that at least some of the clips appeared to be AI-generated, Fox revised the story and replaced the headline with "AI videos of SNAP beneficiaries complaining about cuts go viral." An editor's note now reads: "This article previously reported on some videos that appear to have been generated by AI without noting that. This has been corrected." Fox did not respond to requests for comment.
Newsmax aired a similar segment. Such mistakes may come more easily at outlets whose audiences expect certain narratives, but the broader takeaway is that believable AI clips can deceive almost anyone, including professional journalists and experts, particularly when the content confirms existing biases.
On Sunday, Yann LeCun, Meta's chief AI scientist, shared a short clip that appears to show a New York City police officer telling ICE agents to "back off, now." The clip exhibits traits common to content made with consumer AI video tools such as OpenAI's Sora: a short runtime, slightly stilted or clipped speech, and limited natural motion. It is unclear whether LeCun believed the clip to be authentic or shared it ironically; Meta spokespeople declined to comment, and LeCun did not reply to a request for clarification.
The incident underscores two trends: first, consumer-facing video generators have matured enough to produce clips that can pass casual inspection; second, major AI firms and current U.S. policymakers have shown limited appetite so far for strict guardrails or regulation that would prevent misuse. That gap leaves verification and media literacy as the main defenses.
Convincing fakes also enable a new tactic: deflection. Former President Donald Trump has suggested that when confronted with footage he dislikes, he might simply "blame AI": "If something happens really bad, just blame AI. But also they create things. It works both ways."
With no quick technological fix in sight, the spread of AI-generated media calls for stronger newsroom verification practices, greater public awareness, and clearer policies from platforms and regulators to preserve trust in visual journalism.
