Grok AI, the artificial-intelligence system developed by Elon Musk’s company xAI and integrated into the social media platform X, is facing mounting international scrutiny after multiple women reported that it is being used to digitally strip their clothing and place their likenesses in sexualised images without their consent, The WP Times reports, citing the BBC. The controversy came to public attention after Samantha Smith, a freelance journalist and political commentator, told the BBC that strangers had used Grok to manipulate one of her photos into images of her in bikinis and in sexually suggestive situations.
“I felt dehumanised and reduced into a sexual stereotype,” Smith said. “It wasn’t me in those images, but it looked like me. It felt like me. That’s what makes it so disturbing.”
BBC journalists reviewed several such examples on X, where users publicly tag @Grok beneath photographs of women and issue commands such as “undress her,” “put her in a bikini,” or “make her sexy.” Within seconds, Grok produces an edited image that closely resembles the real person, often preserving the subject’s face, posture and proportions.
Unlike obvious deepfakes or cartoons, the outputs are designed to appear photographic and realistic — blurring the boundary between fabricated imagery and real identity.
How Grok enables the abuse
Grok is not limited to generating text. It includes an image-editing model that allows users to upload photographs and instruct the system to alter them. While marketed as a creative or productivity tool capable of changing backgrounds, lighting or clothing, it can also be prompted to strip clothing away or sexualise the people depicted.
In practice, the abuse follows a simple workflow:
- A user uploads a photograph of a woman to X
- They tag @Grok under the image
- They write a prompt such as “remove her clothes” or “put her in lingerie”
- Grok returns a new image in which the woman appears partially or fully undressed
The resulting image is not labelled as artificial and can be reposted, downloaded or shared like a real photograph.
For victims, the harm mirrors that of non-consensual intimate imagery: loss of control over one’s body, sexualisation without consent, and reputational damage — even when no original nude photo ever existed.
“It felt as if someone had published a nude of me”
After Smith spoke publicly, dozens of women contacted her privately to say the same thing had happened to them. Some reported that once they complained, users deliberately prompted Grok to generate even more altered images of them in retaliation.
Smith described the emotional impact:
“It felt as violating as if someone had actually posted a nude or bikini picture of me. The fact that it was created by AI doesn’t make it less real when people are sharing it.”

Legal and regulatory pressure is rising
The UK Home Office confirmed it is preparing legislation that would explicitly criminalise so-called AI nudification tools, making it illegal to supply or operate technology designed to generate non-consensual sexualised images. Under the proposals, providers could face prison sentences and unlimited fines.
Meanwhile, Ofcom, the UK’s communications regulator, told the BBC that it is already illegal to create or distribute non-consensual intimate images, including AI-generated sexual deepfakes. Platforms are legally required to prevent exposure to such material and remove it when detected. Ofcom did not say whether it has opened a formal investigation into X or Grok.
xAI’s silence — and contradiction
xAI, Elon Musk’s artificial-intelligence company, declined to answer detailed questions from the BBC. Instead, it issued a brief automated reply stating:
“Legacy media lies.”
This response stands in direct contrast to xAI’s own acceptable-use policy, which bans:
“Depicting the likeness of a person in a pornographic or sexualised manner.”
Despite this, Grok continues to generate explicit or suggestive imagery of real women when prompted.
Experts: “This is not a technical failure — it’s a governance failure”
Professor Clare McGlynn, a leading expert on image-based abuse at Durham University, said the situation reflects a lack of enforcement rather than a lack of capability.
“X or Grok could stop this if they chose to. The safeguards exist. What’s missing is the will to enforce them. These images have been circulating for months with no meaningful intervention.”
She warned that as new AI-specific abuse laws come into force across the UK and EU, companies that allow such tools to operate unchecked face serious legal exposure.

The Grok scandal underscores a broader crisis in artificial-intelligence governance. Systems powerful enough to generate realistic images are also powerful enough to weaponise identity, sexuality and consent. As lawmakers move to close the legal gap around AI-generated abuse, the question is no longer whether companies like xAI can control their technology — but whether they will do so before regulators and courts step in.