Elon Musk’s artificial intelligence chatbot Grok is under intense global scrutiny after governments across Europe, Asia and Latin America raised serious concerns over the creation and circulation of sexualised images of women and children generated without consent.
The backlash follows a surge in explicit content linked to Grok Imagine, an AI-powered image generation feature integrated into Musk’s social media platform X. Regulators warn that the tool’s ability to digitally alter real images using text prompts has exposed a dangerous gap in AI governance, with potentially irreversible harm to individuals — particularly women and minors.
Authorities in the United Kingdom, the European Union, France, India, Poland, Malaysia and Brazil have demanded immediate corrective action, launched investigations or threatened regulatory penalties, signalling what could become one of the most significant international confrontations over generative AI misuse to date.
How the controversy escalated
Grok Imagine was launched last year, allowing users to create or modify images and videos using simple text commands. The tool includes a so-called “spicy mode,” designed to permit adult content. While marketed as an edgy alternative to more restricted AI systems, critics argue that this positioning has encouraged misuse.
The situation intensified in recent weeks when Grok reportedly began approving large volumes of user requests to alter images of people posted by others on X. Users were able to generate sexualised depictions by instructing the chatbot to digitally remove or change clothing. Because Grok’s generated images appear publicly on the platform, altered content spread rapidly.
A recent analysis by digital watchdog AI Forensics reviewed 20,000 images generated over a one-week period and found that around 2% appeared to depict individuals under 18. Dozens of images showed young or very young-looking girls in bikinis or transparent clothing, raising urgent concerns about AI-enabled sexual exploitation.
Experts warn that such nudification tools blur the line between consensual creativity and non-consensual abuse, making them particularly difficult to regulate once content goes viral.
Musk and X respond
Musk’s AI company, xAI, responded to media queries with an automated message stating, “Legacy Media Lies.” While the company did not deny the existence of problematic Grok content, X maintained that it enforces rules against illegal material.
In a post on its Safety account, the platform said it removes unlawful content, permanently suspends accounts and cooperates with law enforcement when required. Musk echoed that stance, saying:
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
However, critics argue that enforcement after harm occurs does little to protect victims, particularly when AI tools enable rapid and repeated abuse.
Britain demands urgent action
In the United Kingdom, Technology Secretary Liz Kendall described the content linked to Grok as “absolutely appalling” and demanded urgent intervention by X.
“We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls,” Kendall said.
The UK communications regulator Ofcom confirmed it has made urgent contact with both X and xAI to assess compliance with the Online Safety Act, which requires platforms to prevent and remove child sexual abuse material once identified.
Europe: ‘This is illegal’
The European Commission has also taken a firm stance. Commission spokesman Thomas Regnier said officials are fully aware that Grok is being used to generate explicit sexual content, including imagery resembling children.
“This is not spicy. This is illegal. This is appalling. This is disgusting, and it has no place in Europe,” Regnier said.
EU officials noted that Grok had previously drawn attention for generating Holocaust-denial content, adding to concerns about the platform’s safeguards and oversight mechanisms.
France widens criminal probe
French prosecutors have expanded an ongoing investigation into X to include sexually explicit AI-generated deepfakes. The move follows complaints from lawmakers and alerts from multiple government ministers.
French authorities emphasised that crimes committed online carry the same legal consequences as those committed offline, stressing that AI does not exempt platforms or users from accountability.
India issues ultimatum
India’s Ministry of Electronics and Information Technology issued a 72-hour ultimatum demanding that X remove all unlawful content and submit a detailed report on Grok’s governance and safety framework.
The ministry accused the platform of enabling the “gross misuse” of artificial intelligence by allowing the creation of obscene and derogatory images of women. It warned that failure to comply could result in serious legal consequences. The deadline has since passed without a public response.
Poland, Malaysia and Brazil escalate pressure
In Poland, parliamentary speaker Włodzimierz Czarzasty cited Grok while advocating for stronger digital safety legislation to protect minors, describing the AI’s behaviour as “undressing people digitally.”
Malaysia’s communications regulator confirmed investigations into users who violate laws against obscene content and said it would summon representatives from X.
In Brazil, federal lawmaker Erika Hilton filed complaints with prosecutors and the national data protection authority, calling for Grok’s AI image functions to be suspended during investigations.
“The right to one’s image is individual,” Hilton said. “It cannot be overridden by platform terms of use, and the mass distribution of sexualised images of women and children crosses all ethical and legal boundaries.”
A global reckoning for AI platforms
The Grok controversy has reignited global debate over how far AI companies should be allowed to push boundaries in the name of innovation. Regulators argue that without strict safeguards, generative AI risks normalising digital abuse at unprecedented scale.
As governments consider fines, restrictions and even feature bans, the outcome may set a lasting precedent for how AI systems are regulated worldwide — and how societies balance technological freedom with human dignity.
