Brussels vs. Grok: When "Move Fast and Break Things" Meets "Move Slow and Fine Everything"
The EU opens yet another investigation into Elon Musk's X. This time, the AI really did cross a line.

There is a particular rhythm to the relationship between Elon Musk and the European Commission. Musk builds something that violates every conceivable European regulation. The Commission sends a letter. Musk posts something inflammatory on X. The Commission opens an investigation. Musk claims persecution. The Commission issues a fine. Musk appeals. Repeat ad infinitum.
Today, we have reached a new movement in this regulatory symphony. The European Commission has opened a formal investigation into X over its AI chatbot Grok, which has been generating sexually explicit deepfake images of women and children. Yes, you read that correctly. An artificial intelligence system, deployed to hundreds of millions of users, was programmed in such a way that users could type "remove her clothes" and receive exactly what they asked for.
The Commission's statement was, by Brussels standards, practically incandescent with rage. Executive Vice-President Henna Virkkunen called the deepfakes "a violent, unacceptable form of degradation" and questioned whether X had treated "the rights of European citizens - including those of women and children - as collateral damage of its service." Commission President Ursula von der Leyen added that Europe would not "tolerate unthinkable behaviour, such as digital undressing of women and children."
"We will not hand over consent and child protection to tech companies to violate and monetise. The harm caused by illegal images is very real."
One might think that preventing an AI from generating child sexual abuse material would be among the first items on a product safety checklist. Apparently not. According to the Center for Countering Digital Hate, Grok generated an estimated three million sexualized images of women and children over an 11-day period - an average rate of 190 per minute. Three million. In days. This is not a bug that slipped through quality assurance. This is a fundamental architectural decision that prioritized engagement over ethics, virality over decency.
The Dutch Connection
The investigation is being conducted in cooperation with Coimisiún na Meán, Ireland's Digital Services Coordinator, because X's European headquarters is located in Dublin. But the implications stretch across the entire EU. Dutch regulators have been particularly vocal about platform accountability, and the Netherlands' digital rights organizations have been among the first to document Grok's capabilities.
For a country that recently witnessed its own AI controversies - from the childcare benefits scandal's algorithmic discrimination to ongoing debates about facial recognition in public spaces - the Grok affair hits close to home. The Dutch Data Protection Authority has long argued that AI systems must be designed with fundamental rights as a baseline, not as an afterthought. Grok's behavior suggests that some companies still view such rights as optional features to be geoblocked in "jurisdictions where such content is illegal."
The Regulatory Landscape
This investigation joins a growing pile on Musk's desk:
- December 2023: Ongoing investigation into X's recommender system
- December 2025: €120 million fine for DSA transparency violations
- January 2026: UK Ofcom investigation under the Online Safety Act
- January 2026: Investigations opened in Australia, France, and Germany
- January 2026: Temporary bans in Malaysia and Indonesia
The "Zero Tolerance" That Tolerates Everything
X's response has been characteristically inadequate. The company issued a statement claiming it has "zero tolerance" for child sexual exploitation and nonconsensual nudity. It then announced it would stop allowing users to depict people in "bikinis, underwear or other revealing attire" - but only in places where such content is illegal. Elsewhere, presumably, digital undressing remains a feature, not a bug.
This is the logic of a company that has decided regulatory compliance is a geolocation problem rather than an ethical one. If German law prohibits something, block it in Germany. If Dutch law prohibits something, block it in the Netherlands. If no law explicitly prohibits it, why not allow it? The fact that the content in question involves the sexual exploitation of real people - including children - appears to be a secondary consideration.
The US response has been predictably defensive. Under Secretary of State Sarah Rogers told CNBC that "deepfakes are a troubling, frontier issue that call for tailored, thoughtful responses. Erecting a 'Great Firewall' to ban X, or lobotomizing AI, is neither tailored nor thoughtful." Vice President JD Vance, shortly before the previous €120 million fine was announced, had posted that "the EU should be supporting free speech not attacking American companies over garbage."
One wonders what definition of "free speech" encompasses the right to digitally undress children. Perhaps it's in a constitutional amendment the rest of us missed.
The Broader Pattern
What makes the Grok scandal particularly illuminating is what it reveals about the current state of AI governance - or rather, its absence. We are living through a period in which companies can deploy AI systems to hundreds of millions of users without conducting basic safety assessments. The Commission noted that X appears to have provided no risk assessment whatsoever for Grok's image generation capabilities.
This is not regulatory overreach. This is the bare minimum. The Digital Services Act requires platforms to identify and mitigate systemic risks. Generating millions of non-consensual sexual images is a systemic risk that materialized spectacularly.
If the Commission finds that X violated the DSA, the platform could face fines of up to six percent of its global annual revenue. Given X's current financial state - advertising revenue has reportedly collapsed since Musk's acquisition - such a penalty could be existentially threatening. But fines alone are insufficient. What's needed is a fundamental shift in how AI companies approach deployment: safety by design, not safety by geolocation.
The European Choice
The Grok investigation is ultimately about what kind of digital environment Europe wants to create. The American model, exemplified by Musk's X, treats user safety as a constraint to be minimized and regulation as an attack on innovation. The European model, imperfect as it is, starts from the premise that technology should serve human dignity rather than undermine it.
Neither model is perfect. But when the alternative is an AI that generates child sexual abuse material on demand, the choice becomes rather clear. Brussels may move slowly. But at least it's trying to ensure that "moving fast and breaking things" doesn't include breaking children.
Mr. Squorum
Political Analyst
Political analyst specializing in Dutch-EU relations and European affairs.