Ofcom demands answers from X over claims Grok AI generates sexualised images of children

UK media regulator Ofcom has made “urgent contact” with xAI, the artificial intelligence business owned by Elon Musk, following reports that its Grok chatbot can be used to generate sexualised images of children and non-consensual explicit images of women.

The intervention follows widespread concern over Grok’s image-generation capabilities on X, where users have posted examples of the AI being prompted to digitally “undress” women or place them into sexualised scenarios without consent.

Ofcom confirmed it is investigating whether the use of Grok breaches the UK’s Online Safety Act, which makes it illegal to create or share intimate or sexually explicit images, including AI-generated “deepfakes”, without a person’s consent.

A spokesperson for Ofcom said the regulator is also examining allegations that Grok has been producing “undressed images” of individuals, adding that technology companies are legally required to take appropriate steps to prevent UK users from encountering illegal content and to remove such material swiftly once flagged.

X has not responded publicly to Ofcom’s request for clarification. However, over the weekend the platform issued a warning to users not to use Grok to generate illegal material, including child sexual abuse imagery. Musk also posted on X that anyone prompting Grok to create illegal content would “suffer the same consequences” as if they had uploaded such content themselves.

Despite this, Grok’s own acceptable use policy, which explicitly bans depicting real people in a pornographic manner, appears to have been routinely bypassed. Images of high-profile figures, including Catherine, Princess of Wales, were among those reportedly manipulated using the AI tool.

The Internet Watch Foundation confirmed it has received reports from members of the public relating to Grok-generated images. However, it said that, so far, it had not identified content that crossed the legal threshold to be classified as child sexual abuse material under UK law.

The issue has also triggered scrutiny beyond the UK. The European Commission said it was “seriously looking into the matter”, while regulators in France, Malaysia and India are reportedly assessing whether Grok breaches local laws.

Thomas Regnier, a European Commission spokesperson, described the content as “appalling” and “disgusting”, stating that there was “no place” for such material in Europe. X was fined €120 million (£104 million) by EU regulators in December for breaching its obligations under the Digital Services Act.

Criticism has intensified from UK politicians. Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, said the allegations were “deeply disturbing” and argued that existing safeguards were failing to protect the public. She described the Online Safety Act as “woefully inadequate” and called for stronger enforcement powers against social media platforms.

The controversy has also highlighted the human impact of AI misuse. Journalist Samantha Smith told the BBC that seeing AI-generated images of herself in a bikini was “as violating as if someone had posted a real explicit image”.

“It looked like me. It felt like me. And it was dehumanising,” she said.

The Home Office confirmed it is progressing legislation to outlaw “nudification” tools altogether, with a proposed new criminal offence that would see suppliers of such technology face prison sentences and substantial fines.

As regulators move to tighten scrutiny, the Grok episode has become a flashpoint in the wider debate over AI accountability, platform responsibility and the limits of free expression in the age of generative technology.