EU Declares Nudification by Grok Photos Illegal as Britain Demands Urgent Answers from Musk

Jan 5, 2026

The regulatory walls are closing in on Elon Musk’s xAI after European and British officials launched a coordinated offensive against his "Grok" artificial intelligence chatbot. On Monday, the European Commission explicitly branded the AI’s generation of sexualized images of minors as "illegal," while the UK’s watchdog Ofcom made "urgent contact" with the company, threatening investigation under new online safety laws.

The diplomatic firestorm follows a chaotic week in which Grok’s newly released "edit image" feature was widely used to digitally strip clothing from photos of women and children, sparking a "mass digital undressing spree" across the social platform X.

"This Is Not Spicy. This Is Illegal."
In an unusually blunt statement from Brussels on Monday, EU digital affairs spokesman Thomas Regnier condemned the platform's failure to prevent the creation of Child Sexual Abuse Material (CSAM).

The Accusation: Regnier specifically targeted Grok's "spicy mode," a setting designed to allow edgier content, which users manipulated to generate explicit images of minors. "Grok is now offering a 'spicy mode' showing explicit sexual content with some output generated with childlike images," Regnier told reporters. "This is not spicy. This is illegal. This is appalling."

The Law: The Commission is weighing these offenses under the Digital Services Act (DSA), a sweeping regulation that mandates platforms mitigate systemic risks. X is already under formal investigation by the EU for potential breaches of the DSA regarding disinformation; this new scandal could pile on additional fines, which can reach up to 6% of a company’s global turnover.

Britain’s "Urgent Contact"
Across the Channel, the UK government signaled it is running out of patience. Ofcom, the British media regulator, confirmed it has demanded an immediate explanation from X regarding how its safety guardrails failed so spectacularly.

The UK Warning: A government spokesperson reiterated that under the Online Safety Act, platforms have a legal duty to prevent the proliferation of illegal content, including non-consensual deepfakes.

Prison Threats: British officials went further, reminding the public and the platform that creating "nudified" images without consent is a criminal offense. "Under this new criminal offense, any individuals or companies who design or supply these nudification tools will face a prison sentence and substantial fines," a government representative stated.


The "Edit Image" Catastrophe
The controversy erupted in late December 2025 after xAI rolled out a seemingly benign "edit" button for Grok. Users quickly discovered that the tool had few restrictions, allowing them to upload photos of real people—including celebrities, politicians, and random minors—and prompt the AI to "remove clothes" or "put them in a bikini."


Safeguard Failure: While xAI has since claimed it is "urgently fixing" the issue, reports indicate the tool was released with glaringly insufficient guardrails. In one instance, the AI generated a "sexualized" image of a 14-year-old actress upon a simple user prompt.


Musk's Response: Elon Musk’s reaction has been mixed. While he posted a warning that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," his company’s official press channel dismissed initial media inquiries with a terse automated response: "The mainstream media lies."

A Global Backlash
The fallout is spreading beyond Europe. In France, the public prosecutor's office has expanded an existing investigation into X to include the dissemination of deepfake pornography involving minors. Meanwhile, civil rights groups are warning that the incident proves "safety testing" is virtually non-existent at xAI, which appears to be prioritizing speed and "free speech" over basic user protection.


"This isn't a glitch," said one digital safety analyst. "It is what happens when you fire your safety team and let an unchecked algorithm decide what is legal."
