Militant Groups Are Weaponizing AI: Deepfakes, Recruitment and Growing Security Risks

FILE - Demonstrators chant pro-Islamic State slogans as they carry the group's flags in front of the provincial government headquarters in Mosul, 225 miles (360 kilometers) northwest of Baghdad on June 16, 2014. (AP Photo, File)

Extremist organizations are increasingly experimenting with generative AI to scale recruitment, produce deepfakes and automate cyberattacks, U.S. officials warn. Islamic State and affiliated networks have used AI-generated images, audio and rapid translations to spread propaganda, including after an attack that killed nearly 140 people. Experts say sophisticated uses remain largely aspirational compared with state actors, but the threat is growing as AI becomes cheaper and more powerful. Lawmakers are advancing measures to improve information sharing and require annual AI-risk assessments for homeland security.

WASHINGTON (AP) — As governments, companies and researchers race to harness artificial intelligence, militant and extremist groups are quietly experimenting with the same tools — even when they do not yet have a clear playbook for how to use them.

How Extremists Are Using AI Today

National security officials and intelligence analysts warn that generative AI can accelerate recruitment, produce convincing deepfakes, automate translations and sharpen cyberattacks. A recent post on a pro-Islamic State forum urged supporters to "make AI part of their operations," noting that "one of the best things about AI is how easy it is to use" and calling on followers to "make their nightmares into reality."

Islamic State (IS) — which once controlled territory in Iraq and Syria but now operates as a decentralized network of affiliated militants — has long used social media for recruitment and disinformation. Researchers say the group has experimented with AI-generated images and videos, deepfake audio of leaders, and rapid multilingual translations to extend its reach.

Examples And Effects

Extremist pages circulated doctored images from the Israel–Hamas war that depicted bloodied, abandoned children in ruined buildings; those images inflamed outrage, deepened polarization and were used for recruitment. After an attack claimed by an IS affiliate that killed nearly 140 people at a Russian concert venue last year, AI-crafted propaganda videos proliferated across discussion boards and social platforms.

SITE Intelligence Group, which monitors extremist activity, has documented IS-generated deepfake audio and automated translations, while cybersecurity experts report that hackers already use synthetic audio and video in phishing campaigns to impersonate senior officials and breach networks.

Risks Beyond Propaganda

Analysts caution that while many sophisticated applications remain "aspirational" for extremist groups compared with state actors such as China, Russia or Iran, the risks will grow as powerful, inexpensive AI tools become more widely available. John Laliberte, a former National Security Agency vulnerability researcher and now CEO of ClearVector, noted:

"With AI, even a small group that doesn't have a lot of money is still able to make an impact."

Officials are also concerned that AI could lower technical barriers to cyberattacks and even aid attempts to develop biological or chemical weapons, a risk highlighted in the Department of Homeland Security's updated Homeland Threat Assessment.

Policy Responses And What Comes Next

U.S. lawmakers and security officials are proposing steps to counter misuse. Sen. Mark Warner (D-Va.), ranking member of the Senate Intelligence Committee, has argued for clearer mechanisms that let AI developers share intelligence about misuse by extremists, criminal hackers and foreign operatives. At a recent congressional hearing, members were told that IS and al-Qaida have even hosted workshops to teach supporters how to apply AI tools.

Legislation passed by the U.S. House would require annual homeland security assessments of AI-related risks posed by extremist groups. Rep. August Pfluger (R-Texas), sponsor of the bill, said defenses must evolve alongside threats:

"Our policies and capabilities must keep pace with the threats of tomorrow."

While state actors remain ahead in technical sophistication, the combination of widely available generative tools and social platforms makes it urgent for governments, tech companies and civil society to coordinate detection, information-sharing and mitigation strategies.
