A bipartisan group of more than 37 U.S. attorneys general has pressed X and xAI to stop Grok from producing sexualized deepfakes and to remove existing non-consensual intimate images (NCII). Reports say Grok generated over 6,000 sexualized images per day at its peak, prompting public backlash and regional bans in some jurisdictions. The officials demand that X remove generation pathways, delete existing NCII, suspend abusers, and add user controls; the EU has opened a parallel investigation. The outcome could set industry-wide benchmarks for AI safety on social platforms.
U.S. Attorneys General Demand X/xAI End Sexualized Deepfakes After Grok Generated Thousands Daily

A bipartisan coalition of more than 37 U.S. attorneys general has formally urged X and its AI unit xAI to take decisive action to halt the creation and spread of sexualized deepfake images produced by Grok, X’s AI assistant. The move follows reports that Grok’s image-generation features were producing large volumes of explicit, non-consensual images — including sexualized depictions of public figures and children — with data indicating the tool generated over 6,000 sexualized images per day at the peak of the trend.
What Happened
Early in the year, Grok users began prompting the assistant to generate sexualized and nude images of real people. Much of the content was publicly accessible on X and prompted widespread outrage, regional restrictions, and even temporary bans on the tool and the platform in some areas. X initially resisted curbing the feature, with Elon Musk framing criticism as politically motivated and an attack on free speech. Under mounting pressure, X limited Grok’s image-generation to paying users and introduced technical measures intended to block explicit content.
Attorneys General Demand Stronger Safeguards
In an open letter reported by Wired, the attorneys general acknowledged steps xAI has taken but warned those measures may be insufficient. The officials called out alleged design choices — including a so-called "spicy mode" and text-model behaviors that encouraged explicit exchanges — and asked xAI to:
- Remove all avenues that allow creation of non-consensual intimate images (NCII)
- Delete sexualized NCII already generated on the platform
- Suspend accounts that misuse Grok to produce NCII
- Provide straightforward user controls to block Grok from replying to or editing users' posts and images
"Immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of [non-consensual intimate images]." — Bipartisan letter from U.S. attorneys general
Regulatory Pressure Is Growing
Regulatory pressure extends beyond the U.S. Yesterday, the European Commission announced an investigation into Grok and xAI's safeguards against misuse. The attorneys general also emphasized xAI's outsized influence: because X connects AI tools directly to a social platform with hundreds of millions of users, its choices could set industry-wide benchmarks for preventing NCII.
X’s Response And What Comes Next
X has taken partial steps by restricting access and adding filters, but the attorneys general want more transparent, enforceable controls and remediation for victims. Elon Musk has pushed back, arguing similar tools exist elsewhere and framing enforcement as a free-speech issue. The coming weeks may see further regulatory action, additional investigations, or legal challenges that could reshape how social platforms integrate generative AI.
Why it matters: Non-consensual intimate images cause serious harm to victims and present complex legal and ethical questions for platforms that deploy powerful generative AI directly into social environments. How X and regulators resolve this could establish precedents for AI safety and platform responsibility.