On 10 January 2026, as regulators in London and several European capitals moved to examine Elon Musk’s social media platform X over the use of its AI chatbot Grok to generate deepfake sexual images, the billionaire accused the UK government of using the scandal as an “excuse for censorship”, The WP Times reports.

The dispute centres on whether Grok AI, which allows users to edit and manipulate photographs directly inside X, has breached the UK’s Online Safety Act by enabling the creation of non-consensual sexualised images of real women and children. On 9 January, Ofcom confirmed it had launched an expedited regulatory assessment after receiving reports that Grok was being used to digitally undress people and generate child abuse imagery.

Political pressure intensified after Ashley St Clair, a US conservative influencer and the mother of one of Musk’s children, told the BBC that Grok had produced sexualised images of her as a child, despite her explicitly refusing consent. St Clair said the images showed her “almost naked” and accused X of failing to deploy even basic technical safeguards that could have prevented the abuse.

Downing Street has warned that X could face formal enforcement action or even restrictions in the UK if it is found to be in breach of online safety rules, while Technology Secretary Liz Kendall said she would back Ofcom in using its full legal powers against platforms that allow AI-generated deepfake abuse to spread. Musk has rejected the criticism, arguing that the government is unfairly targeting X while similar tools operate elsewhere, and has framed the Grok investigation as a broader battle over free speech and digital censorship.

What triggered the political backlash

Grok, the artificial-intelligence chatbot embedded directly into X, allows users to tag the system under photographs and request edits in real time. Over the past week, that feature has been used to digitally remove clothing from images of women and to generate sexually explicit material involving children, according to campaigners, charities and multiple media investigations.

The Internet Watch Foundation (IWF) said its analysts had identified material involving girls aged between 11 and 13 that appeared to have been generated using Grok. The charity warned that generative-AI tools like Grok are dramatically lowering the technical barrier for criminals, allowing anyone with an internet connection to create abuse imagery without specialist skills.

UK ministers said the scale and speed of the abuse had left regulators no choice but to intervene. Prime Minister Sir Keir Starmer called the misuse of Grok “disgraceful” and said X needed “to get a grip” on how its technology was being deployed. Liz Kendall, the technology secretary, said the government would support Ofcom in taking the strongest possible action if X was found to be breaching British law.

“Sexually manipulating images of women and children is despicable and abhorrent,” Kendall said, adding that she expected Ofcom to provide an update on its investigation “in days, not weeks”.

Musk’s response and the censorship claim

Musk has rejected the political pressure, arguing that the UK government is exploiting the Grok controversy to justify tighter control over online speech. In a series of posts on X, he said critics were looking for “any excuse for censorship” and pointed to other AI systems that can also generate explicit images.

The billionaire has previously said that anyone who uses Grok to produce illegal content would face the same consequences as if they had uploaded the material themselves. But UK officials say that is not enough, arguing that platforms also carry responsibility when their design enables harm at scale.

On 9 January, X altered Grok’s settings so that image-editing features are now restricted to paid subscribers. Downing Street dismissed the move as “insulting” to victims, saying it failed to deal with the underlying problem that the tool could generate abusive imagery at all.

Ashley St Clair and the human impact

The political dispute took on a deeply personal dimension after Ashley St Clair told BBC Newshour that Grok had generated sexualised images of her as a child. St Clair, who is in a legal battle with Musk over custody of their son, said the chatbot produced images showing her “basically nude” despite her explicitly refusing consent.

“This could be stopped with a single message to an engineer,” she said. “X is not taking enough action.”

Campaigners say her case highlights how generative AI is already being used to humiliate, intimidate and violate women, often with little recourse. Dr Daisy Dixon, a lecturer at Cardiff University who has also been targeted, said X’s changes felt “like a sticking plaster”.

“Grok needs built-in ethical guardrails so this can never happen again,” she said. “This is a form of gender-based digital abuse.”

What Ofcom can do

Ofcom confirmed it had made urgent contact with X earlier this week and set a deadline of 9 January for the company to explain how Grok was being controlled. The regulator is now conducting what it calls an expedited assessment to decide whether X has breached the Online Safety Act.

If it finds serious failures, Ofcom can apply to the courts for so-called business disruption measures, which could cut X off from payment and advertising services in the UK or even prevent the service from being accessed in the country. Those powers exist in law but have never yet been used against a major global platform.

However, senior MPs have warned that the legislation may not be watertight. Dame Chi Onwurah, chair of the Commons technology committee, said it was unclear whether the act fully covers AI systems that can “nudify” images. Caroline Dinenage, who chairs the culture and media committee, said she feared the regulator’s powers over generative-AI functionality were not yet strong enough.

Why the Grok case matters beyond X

The outcome of the Grok investigation is being closely watched across the global technology industry. Generative AI tools capable of producing convincing deepfakes are spreading far faster than the laws designed to regulate them, creating a growing gap between technological power and legal control.

For the UK government, the case is a test of whether the Online Safety Act can be enforced in the age of AI. For Musk, it is a direct confrontation over whether X will submit to national regulation or continue to operate on its own rules. For Ashley St Clair and thousands of other women targeted by deepfake abuse, it is about something more fundamental: whether the digital world will protect people from being stripped of their identity and dignity at the click of a button.
