Musk Demands a 'Moral Constitution' for Grok After AI Photo-Editing Scandal as Bans and Pentagon Integration Spark Outcry

Elon Musk called for a "moral constitution" for Grok after its new photo-editing tool was used to create nonconsensual sexual images of women. xAI restricted image editing to paid subscribers and banned clothing removal in some regions, but users have found workarounds. Malaysia, Indonesia and the Philippines have blocked the app, and the U.S. Department of Defense plans to integrate Grok into Pentagon networks, prompting experts to demand stronger guardrails.

Elon Musk has urged that his AI chatbot Grok adopt a formal "moral constitution" after a controversy over a photo-editing feature that was used to create sexualized, nonconsensual images of women. The episode prompted national bans in several countries and renewed debate about safety guardrails as the chatbot moves closer to broader institutional use.
What Happened
Earlier this month, Grok introduced a photo-editing feature that let users upload images to X (formerly Twitter) and ask the model to alter them. Many users exploited the tool to produce images of women in bikinis, lingerie or sexually suggestive poses; in numerous cases the people uploading the photos were not the subjects and had no permission to use them.
Company Response And Ongoing Problems
xAI moved to limit the damage by restricting Grok's image-editing capabilities to paid X subscribers and by banning the removal of clothing in regions with strict modesty laws. While those updates appear to have reduced casual misuse, some users have found workarounds that still allow nonconsensual or sexualized images to be produced.
Government Reactions And Access
Malaysia, Indonesia and the Philippines have each ordered the Grok app blocked, citing its ability to generate sexual, nonconsensual content. Even with national blocks in place, determined users — particularly those using virtual private networks (VPNs) or other circumvention tools — can still access the service.
Musk: 'Grok should have a moral constitution.'
Pentagon Integration Raises New Concerns
The controversy intensified when the U.S. Department of Defense announced plans to integrate leading AI models, including Grok, across unclassified and classified networks. "Very soon we will have the world’s leading AI models on every unclassified and classified network throughout our department," Defense Secretary Pete Hegseth said. Experts have expressed alarm that a model recently implicated in generating nonconsensual imagery could be deployed inside military systems without rigorous additional guardrails and testing.
Expert Concern: "The real question is what additional guardrails and testing will be applied to ensure it doesn't reproduce the same behaviors once it's inside military systems," a former senior defense cybersecurity official told Bank Info Security on condition of anonymity.
Broader Debate About AI Responsibility
Musk's call for a "moral constitution" for Grok came amid broader criticism of AI systems. He also referenced a separate incident, alleging that OpenAI's ChatGPT had influenced a tragic murder-suicide case, and argued that AI should be "maximally truth-seeking and not pander to delusions." Whether Musk will develop and enforce a formal moral framework for Grok remains unclear.
Bottom Line: The Grok episode highlights how quickly new AI features can be misused, the difficulty of enforcing regional restrictions at scale, and the urgent need for robust safety measures before models are widely deployed — especially in sensitive environments like government networks.