CRBC News

Deepfake Offensive: AI-Generated Videos Portray 'Demoralised' Ukrainian Troops as Battle for Pokrovsk Rages

AI-generated videos have flooded social media amid intense fighting for Pokrovsk, claiming to show demoralised or retreating Ukrainian soldiers. Experts say the clips use emotional narratives and technical glitches to erode public confidence and boost pro-Russian messaging. Platforms removed some accounts, but millions of views and wide cross-border sharing highlight how synthetic media complicates verification and outpaces current moderation efforts.

Heavy fighting for the logistics hub of Pokrovsk in eastern Ukraine continues, but alongside the battlefield struggle a parallel information war is unfolding online. Dozens of AI-generated videos, widely shared across social platforms in November, claim to show Ukrainian soldiers surrendering, crying or refusing to fight — footage that experts say is fake and designed to influence morale and public opinion.

What the videos show

The synthetic clips often depict soldiers in Ukrainian uniforms weeping, surrendering weapons, or describing a chaotic retreat. Some show obvious visual glitches — a soldier walking easily despite an apparent cast, a stretcher that seems to float, and body parts that fade in and out. Several videos carried the logo of Sora, OpenAI's video-generation tool, while others appear to have reused the faces of Russian streamers without their consent.

"This fits a broader narrative we have seen since the start of the invasion — that President Zelensky is sending the young and the elderly to fight against their will," said Pablo Maristany de las Casas of the Institute for Strategic Dialogue.

How the disinformation works

Researchers say the clips exploit visual uncertainty and emotional storytelling to "chip away" at Ukrainian morale and to reinforce pro-Russian narratives. Carole Grimaud, a researcher at Aix-Marseille University, explained that manipulators "instrumentalise uncertainty to sow doubt in public opinion." Ian Garner of the Pilecki Institute described the tactic as "old propaganda with new technology": the imagery invites viewers to imagine a loved one in the same situation, magnifying its emotional impact.

Distribution and response

The manipulated videos spread across Instagram, Telegram, Facebook and X, appearing in posts in Greek, Romanian, Bulgarian, Czech, Polish and French, as well as on some regional outlets. TikTok removed accounts tied to some of the clips, but not before individual posts had amassed hundreds of thousands of likes and millions of views. OpenAI has said it investigated the misuse of its tools.

Analyses by the Institute for Strategic Dialogue and the European Digital Media Observatory show that AI-driven disinformation is increasing: one study found that nearly one-fifth of responses from a sample of chatbots cited sources attributed to Russian state media. The sheer scale and speed of synthetic content, experts warn, often outpace platform moderation and company responses.

Why it matters

While visual glitches remain a giveaway in many clips, generative tools are improving rapidly, making fakes harder to detect. Researchers caution that repeated exposure to fabricated material can shift public perceptions, altering both domestic and international narratives about the conflict.

Observers urge media literacy, verification by independent fact-checkers, and stronger platform safeguards to limit the spread and impact of synthetic propaganda. As technology advances, so do the stakes of information warfare around active conflicts like the battle for Pokrovsk.
