AI Deepfakes Flood Social Platforms After Minneapolis Shooting, Experts Warn of 'Hallucinated' News

Hours after a fatal shooting in Minneapolis allegedly involving an immigration agent, dozens of AI-generated images and altered photos began circulating across social platforms, many claiming to reveal the officer or misrepresenting the victim. The wave of fabrications, some sexually explicit or evocative of past police violence, underscores experts' concerns about readily available generative tools that can produce convincing but false visuals in the wake of breaking news.
Victim And Footage: The woman killed on Wednesday has been identified as 37-year-old Renee Nicole Good. Video footage reviewed by multiple outlets shows she was struck at point-blank range as she appeared to try to drive away while masked agents crowded around her Honda SUV. The authentic clip does not show any ICE agents with their masks off.
How The Fabrications Spread: Reporters at AFP found dozens of posts across social platforms — most prominently X (formerly Twitter) — in which users shared images created by artificial intelligence that claimed to "unmask" a U.S. Immigration and Customs Enforcement (ICE) agent. Many of the manipulated images were produced using Grok, an AI tool from Elon Musk's xAI, which has faced criticism for an "edit" feature that can generate explicit or altered imagery.
High-Profile Amplification: Claude Taylor, who leads the anti-Trump political action committee Mad Dog, posted a collection of AI images on X with the comment, "We need his name." That post drew more than 1.3 million views. Taylor later said he removed the post after learning the images were AI-generated, though the content remained accessible to some users.
Examples Of Manipulation: Some users employed AI to digitally undress an older photo of Good and a more recent image of her body taken after the shooting; the resulting pictures showed her in a bikini. Another woman wrongly identified as Good was subjected to grotesque manipulations, including one altered image that portrayed her as a fountain and another that placed her neck under a masked agent's knee, imagery reminiscent of the 2020 killing of George Floyd in Minneapolis.
What Platforms And Developers Said: There was no immediate public comment from X. When AFP contacted xAI, the company replied with an automated message reading, "Legacy Media Lies." Disinformation tracker NewsGuard identified four distinct AI-generated falsehoods about the shooting that together amassed roughly 4.24 million views across X, Instagram, Threads, and TikTok.
"Given the accessibility of advanced AI tools, it is now standard practice for actors on the internet to 'add to the story' of breaking news in ways that do not correspond to what is actually happening, often in politically partisan ways," said Walter Scheirer of the University of Notre Dame. "A new development has been the use of AI to 'fill in the blanks' of a story, for instance, the use of AI to 'reveal' the face of the ICE officer. This is hallucinated information."
Experts' Concerns: Researchers say the rapid creation and distribution of fabricated visuals create alternate realities around news events and accelerate the spread of misinformation. Hany Farid, co-founder of GetReal Security and a professor at UC Berkeley, called these distortions "problematic" and said they add to the "growing pollution of our information ecosystem." Experts also warn that AI tools are increasingly used to dehumanize victims and inflame partisan narratives.
Takeaway: The incident highlights how quickly generative AI can be used to produce convincing but false content after breaking events. Fact-checking, video verification, and cautious platform moderation are becoming ever more critical to prevent harm, misidentification, and the spread of fabricated stories in the public sphere.