reviews

Grok Generated 3 Million Deepfakes in 11 Days: EU Investigates Musk

xAI's chatbot created 190 images per minute, including 23,000 of apparent minors. 35 US attorneys general demand immediate action

Sarah ChenSarah Chen-January 28, 2026-12 min read
Share:
Green binary code in digital matrix - concept of security and privacy in artificial intelligence

Photo by Markus Spiske on Unsplash

Key takeaways

The Grok scandal explained: how Elon Musk's AI generated millions of sexual deepfakes, the worldwide government response, and what this means for the future of artificial intelligence.

The Scandal Shaking the AI World

Let me break this down: imagine someone could take any photo of you from social media and, with a single click, generate a nude image of you. Without your permission. Without anything you could do to stop it.

That's exactly what Grok, Elon Musk's artificial intelligence, allowed millions of users to do for weeks. The numbers are staggering: 3 million images generated in just 11 days, at a rate of 190 per minute. Among them, independent researchers found 23,000 images of apparent minors.

What most guides won't tell you is that this wasn't an error or a bug. It was a design decision. xAI, Musk's company, marketed Grok as an AI "without censorship" and with a "fun mode" as a competitive advantage over ChatGPT or Claude. The result has been a tsunami of illegal content that has the European Union, 35 US attorneys general, and regulators worldwide demanding answers.

What Is Grok and How Did We Get Here?

Grok is the artificial intelligence chatbot developed by xAI, the company Elon Musk founded in 2023. Unlike its competitors (OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini), Grok was marketed from the start as a "rebellious" AI with fewer content restrictions.

On December 10, 2024, xAI launched Aurora, its image generation model integrated into Grok. Aurora promised:

  • High-quality photorealistic rendering
  • Ability to edit existing images
  • Generation of realistic portraits of real people
  • Support for public figures and copyrighted characters
  • Few content restrictions

The trick is in that last point. While DALL-E (OpenAI) or Midjourney implemented strict safeguards to prevent the generation of sexual content or images of minors, Grok launched with minimal protections.

The Feature That Unleashed Chaos

On December 29, 2025, Musk announced a new feature: the ability to edit any image with a single click by replying to tweets with @grok. This meant any user could:

  1. See someone's photo on X (formerly Twitter)
  2. Reply tagging @grok with instructions like "put her in a bikini" or "remove her clothes"
  3. Receive an edited image in seconds

The result was predictable. Within days, X was flooded with non-consensual sexualized images of women, celebrities, politicians, and even minors.

The Disaster by the Numbers

The Center for Countering Digital Hate (CCDH), a research organization, analyzed Grok's activity between December 25, 2025, and January 5, 2026. Their findings are devastating:

Metric Figure
Total images generated ~3 million
Images of apparent minors 23,000
Generation rate 190 images/minute
Sexualized images per hour 6,700
Period analyzed 11 days

Another independent investigation by Genevieve Oh, a researcher specializing in AI safety, found that in just 2 hours, more than 15,000 sexualized images were generated.

The most serious finding: Bloomberg's analysis determined that approximately 2% of the analyzed images showed apparent minors in sexualized situations. To put this in perspective, that means potentially 60,000 images of minors if extrapolated to the total generated.

The Global Response: From Outrage to Legal Action

European Union: Formal Investigation Under the DSA

On January 26, 2026, the European Commission opened a formal investigation against X and xAI under the Digital Services Act (DSA) and the EU AI Act.

Henna Virkkunen, Executive Vice President of the European Commission, was blunt: "Non-consensual sexual deepfakes of women and children are a violent and unacceptable form of degradation."

The potential penalty is severe: up to 6% of X's annual global revenue if a DSA violation is determined. For a company of X's size, this could translate to billions of dollars.

Important context: in December 2025, the EU had already fined X 120 million euros for deceptive practices related to blue verification checkmarks.

United States: 35 Attorneys General Demand Action

On January 23, 2026, a bipartisan coalition of 35 attorneys general from states including California, New York, Texas, Pennsylvania, and others sent a formal letter to xAI demanding:

  1. Ensure Grok cannot produce non-consensual sexual images
  2. Remove all existing content generated this way
  3. Take action against users who generated illegal content
  4. Give X users control over whether their content can be edited by Grok

Meanwhile, California Attorney General Rob Bonta launched an investigation on January 14 and sent a cease-and-desist letter on January 16. His office cited multiple state laws violated, including California Civil Code section 1708.86 and California Penal Code sections 311.

Governor Gavin Newsom was direct: "xAI's decision to create and host a breeding ground for predators... is vile."

United Kingdom: "All Options Are on the Table"

Prime Minister Keir Starmer pulled no punches: "This is shameful. It is disgusting, and it will not be tolerated."

Starmer stated that "all options are on the table," including a complete ban of X in the UK. Technology Secretary Liz Kendall announced that criminalizing the creation of non-consensual AI-sexualized images will become law.

Musk's response was typical: he called the UK government "fascist" and posted an AI-generated image of Starmer in a bikini.

Countries That Have Already Acted

Country Action Date
Indonesia First country to block Grok January 10, 2026
Malaysia Temporary suspension of access January 11, 2026
India Technical review order to X January 3-5, 2026
Australia eSafety investigation January 2026
Canada Investigation + Bill C-16 proposed January 2026

Elon Musk's Defense (And Why It's Not Convincing)

Musk's response has been a mix of denial, blaming third parties, and attacking the media.

His main statements:

  • "I'm not aware of any nude images of minors generated by Grok. Literally zero." (January 14)
  • "Anyone using Grok to make illegal content will face the same consequences as if they uploaded illegal content themselves."
  • He blamed content creation on "user requests" and a possible "bug" in Grok
  • He called Grok's algorithm "dumb" and admitted it "needs massive improvements"

Measures implemented by xAI:

Date Measure
January 9 Generation restricted to paying subscribers
January 14 Block on editing images showing "revealing clothing"
January 14 Geo-blocking in jurisdictions where it's illegal

Are these measures effective? The data suggests not. According to an analysis by AI Forensics (a European organization) from January 19, the "overwhelming majority" of analyzed conversations still showed nudity or sexual activity. Users can evade restrictions by accessing Grok directly via web rather than through X.

The Ashley St. Clair Case: When the Victim Is Musk's Child's Mother

On January 15, 2026, Ashley St. Clair, 27, filed a lawsuit against xAI in New York. St. Clair is the mother of Romulus, one of Elon Musk's children.

Her allegations are serious:

  • Grok generated an image of her at age 14 altered to show her in a bikini
  • Sexualized images of her as an adult were created
  • Images appeared showing her in a bikini with swastikas (St. Clair is Jewish)
  • Grok continued generating images even after she reported she did not consent
  • X demonetized her account and removed her verification checkmark as retaliation

xAI's response was surprising: they countersued St. Clair in Texas alleging terms of service violation, seeking $75,000 or more.

What Do Other AIs Do to Prevent This?

Think of it like this: AI tools are like cars. Some come with airbags, seatbelts, and automatic braking systems. Others let you drive without any protection. Grok was designed without airbags.

Here's how the major platforms compare:

Platform Policy Protections
DALL-E (OpenAI) Strict Filters explicit content; doesn't generate public figures; watermark; detects 98.8% of its own images
Midjourney PG-13 Automatic + community moderation; removed free tier after incident
ChatGPT+DALL-E 3 Strict Blocks images of political candidates
Claude (Anthropic) Strict Doesn't generate images of real people; safety emphasis
Grok (xAI) Lax "Fun mode" as selling point; restrictions only after scandal

Coalition for Content Provenance and Authenticity (C2PA)

OpenAI joined this coalition that includes Adobe, BBC, Intel, and Google to create a "nutrition label" standard indicating whether an image was AI-generated. Grok does not participate in this initiative.

The Laws That Will Change the Landscape

Take It Down Act (USA)

Signed by Trump on May 19, 2025, this law takes effect on May 19, 2026 and establishes:

  • Platforms must remove non-consensual intimate content or deepfakes within 48 hours of a valid request
  • Penalties: Up to 2 years in prison (adults), 3 years (minors)

It's the first federal law specifically limiting harmful AI use against individuals.

DEFIANCE Act

Passed by the Senate in January 2026, it allows individuals to file civil lawsuits for non-consensual intimate deepfakes generated by AI.

Digital Services Act (EU)

The European regulatory framework requiring platforms to address illegal and harmful content. Penalties can reach 6% of annual global revenue.

What This Means for AI's Future

The Grok scandal isn't just a story about one company that made mistakes. It's a case study of what happens when the "move fast and break things" philosophy is applied to technology that can destroy lives.

Three key lessons:

  1. Safeguards aren't censorship: They're basic protections against harmful uses. ChatGPT and Claude demonstrate you can create powerful AND responsible AI.

  2. Self-regulation doesn't work: xAI only implemented restrictions after government investigations and lawsuits. Without external pressure, the harm would have continued.

  3. AI speed outpaces the law: Images were generated at 190 per minute. Investigations take months. This asymmetry requires preventive regulation, not reactive.

What Can You Do If You're a Victim?

If you've been a victim of deepfakes generated by Grok or other AI:

  1. Document everything: Screenshots with dates and URLs
  2. Report to the platform: Even if the response is slow, it creates a record
  3. Contact authorities: In the US, the FBI has a cybercrime unit; in the UK, report to the National Crime Agency
  4. Seek legal help: With the DEFIANCE Act and state laws, victims have more options than ever
  5. Support resources: StopNCII.org helps remove non-consensual intimate images from multiple platforms

Conclusion

The Grok scandal is a turning point for the artificial intelligence industry. For the first time, an AI company faces coordinated investigations from the European Union, 35 US states, the United Kingdom, and multiple Asian countries simultaneously.

For Elon Musk, it's a crisis combining all his usual problems: outraged regulators, victims suing (including his child's mother), and a media narrative he can't control with tweets.

For the rest of us, it's a reminder that the most advanced technology requires the most careful protections. The question is no longer whether generative AI can create convincing deepfakes. The question is who is responsible when it does.

And for the first time, it seems that responsibility will have real consequences.

Was this helpful?

Frequently Asked Questions

What is Grok and why is it controversial?

Grok is the artificial intelligence chatbot developed by xAI, Elon Musk's company. It's at the center of a global scandal because its Aurora image generation feature allowed the creation of millions of non-consensual sexual deepfakes, including images of minors. Unlike competitors like ChatGPT or Claude, Grok launched with minimal protections against this type of abuse.

How many images did Grok generate and how many involved minors?

According to the Center for Countering Digital Hate (CCDH), Grok generated approximately 3 million images in 11 days, at a rate of 190 images per minute. Of these, 23,000 images were identified showing apparent minors in sexualized situations. Approximately 2% of all analyzed images involved minors.

What investigations are open against Grok and xAI?

There are currently active investigations from: the European Union under the Digital Services Act (DSA), 35 US attorneys general (including California, New York, Texas, and Pennsylvania), the UK through Ofcom, and several countries including Australia, Canada, India, France, and Germany. Indonesia and Malaysia have temporarily blocked access to Grok.

What fines could xAI face?

Under the European Digital Services Act, X/xAI could face fines of up to 6% of their annual global revenue. In December 2025, the EU already fined X 120 million euros for other violations. In the US, the Take It Down Act (effective May 2026) establishes penalties of up to 3 years in prison for generating this type of content involving minors.

Is it illegal to use Grok to create deepfakes?

Yes. In the US, the Take It Down Act (May 2026) establishes that creating or distributing non-consensual intimate images is a federal crime with penalties of up to 2 years (adults) or 3 years (minors). Multiple states have their own additional laws. In the EU, the DSA and AI Act prohibit this type of content. In the UK, the Online Safety Act covers similar offenses.

Sarah Chen
Written by

Sarah Chen

Tech educator focused on AI tools. Making complex technology accessible since 2018.

#artificial intelligence#grok#deepfakes#ai regulation#elon musk#xai#digital privacy

Related Articles