Viral 'ICE Raid' Clips on Facebook Are AI Deepfakes — The Bigger Problem Is How Easily People Believe Them
AI-generated deepfake videos claiming to show harsh ICE raids have gone viral on Facebook despite clear visual errors like misspellings and unnatural movement. An account named "USA Journey 897" posted staged clips — one Walmart video alone drew about 4.1M views — that experts say were designed to provoke emotion and boost engagement. Researchers warn that inexpensive AI tools make politically charged fakes easier to produce and that such content could influence less tech-savvy voters and election discourse. Verify sensational clips with trusted sources and look for AI artifacts before sharing.

AI-generated 'ICE raid' videos are spreading on Facebook — and people are falling for them
Summary: Several staged, AI-generated videos purporting to show brutal Immigration and Customs Enforcement (ICE) raids have gone viral on Facebook, drawing millions of views and thousands of interactions. The clips contain obvious AI artifacts — misspellings, unnatural movement and visual glitches — but their emotional content drives engagement and misleads many viewers.
As 404 Media reported, an account calling itself "USA Journey 897" posted multiple deepfake clips on the Meta-owned platform. One of the most-shared videos claims to show Latino Walmart employees herded into a van labeled with the misspelled phrase "IMMIGRATION AND CERS". Observers noted odd details, including an officer with a badge reading "POICE" and a worker who moves with an unnatural, shuffling gait.
Other posts by the same account stage similar scenes: McDonald’s employees allegedly detained during raids and food-delivery couriers intercepted on city streets. Although these videos are clearly fabricated — rife with telltale AI artifacts — many viewers nonetheless treated them as authentic.
"It’s about time they go to Walmart," one commenter wrote on the Walmart clip, which has amassed roughly 4.1 million views, 16,700 likes, 1,900 comments and 2,200 shares on Facebook.
Other AI-generated clips are even more cinematic, including one that shows a young woman screaming as she is deported and separated from her baby. That video drew a mix of condemnations and applause; some viewers praised the deportations, while others expressed horror and disbelief.
Why these fakes spread so easily
Experts say deepfakes are designed to provoke intense emotions, which in turn fuel engagement on social platforms. Sherry Pagoto, a psychologist and director of the UConn Center for mHealth and Social Media, explains that when fear or anger takes over, "our ability to scrutinize the veracity of the content is weakened." Creators who monetize clicks know how to craft content that triggers those responses.
Jen Golbeck, a professor at the University of Maryland who studies social media and AI, notes that many deepfakes still reveal obvious glitches — awkward hands, misspellings, mismatched audio or an uncanny-valley effect — but casual viewers scrolling quickly are unlikely to inspect footage closely. "If you are giving something half your attention... if AI is rendered well enough, it could look real," she said.
Cayce Myers, a communications professor at Virginia Tech, warned that inexpensive AI tools make it easier for bad actors to produce politically resonant fakes. That dynamic poses a clear threat to elections and public discourse, especially among lower-information or less tech-savvy voters who may accept content that confirms their biases.
Political misuse and recent examples
State and foreign actors have run disinformation campaigns around U.S. elections before. Now political operatives are experimenting with deepfakes as well. Before last month's election, Senate Republicans posted a 30-second clip on X and YouTube that combined Sen. Chuck Schumer's real words with AI-generated imagery. The video did include a small transparent watermark and the label "AI generated," but those are details a fast-scrolling user could easily miss.
Last week, racist AI-generated videos falsely claiming that Black SNAP recipients had their EBT cards declined, or were selling their benefits, also circulated widely, showing how rapidly inflammatory falsehoods can spread.
How to spot and respond to deepfakes
Look for common signs of AI manipulation: misspelled text, unnatural movement, inconsistent lighting or shadows, odd facial expressions or audio that does not sync. Verify footage with reputable news sources before sharing, check the poster’s account history, and be skeptical of clips that seem designed to provoke extreme emotional reactions.
Platforms, researchers and regulators will need better detection, labeling and moderation tools to combat this trend, but users also play a critical role. Slowing down, questioning sensational content, and relying on trusted fact-checks can reduce the reach of harmful deepfakes.
Source: 404 Media reporting and expert interviews summarized from that coverage.
