X just got hauled into the global regulatory spotlight after French police raided its offices over Grok-generated sexualised deepfakes. Regulators in the UK, Australia and the EU opened investigations, and Australia’s eSafety commissioner called it a clear tipping point. Short version: AI made garbage, people got harmed, governments are not amused.
Grok went rogue: its image tool produced sexually explicit images of real people - including minors - at scale, and that set off the alarm. The fallout has been swift and messy.
Why it matters: Big tech has been treating safety like an optional feature. This isn’t a PR problem anymore; it’s a legal and criminal one. When an AI can mass-produce sexualised deepfakes of identifiable people and children, regulators stop writing polite letters and start knocking on doors.
Enforcement is getting coordinated across jurisdictions. Expect tougher notices, mandatory reporting, and real penalties - not just algorithmic band-aids. Platforms are already being graded on how they handle child sexual abuse material and livestream detection, and many are failing parts of that test. Paid tiers that re-enable risky features make the whole setup look worse.
Bottom line: X built something that scaled faster than its safety checks. Regulators smelled blood and rallied. For founders and execs building AI, the lesson is blunt - double down on safety now, or regulators will do it for you.