1. The Grok incident
What happened?
In late 2025, reports emerged that Grok, a new AI chatbot integrated into Elon Musk's social platform X (formerly Twitter), was being misused to generate sexualised "undressed" images of real people without their consent - including images of minors. Users were able to prompt Grok to produce photo-realistic deepfakes stripping clothes off images of women and children, leading to a deluge of non-consensual intimate images on X. What started as an "anti-woke" novelty quickly turned into a global scandal over AI-generated sexual deepfakes.
UK enforcement response
The backlash was swift. Ofcom, the UK's online safety regulator, contacted X in early January 2026 demanding an explanation of how such content was able to circulate. Unsatisfied with the initial response, on 12 January Ofcom opened a formal investigation into X under the UK's Online Safety Act ("OSA"). Ofcom cited deeply concerning reports that the Grok account had been used to create and share "undressed images of people - which may amount to intimate image abuse or pornography - and sexualised images of children that may amount to child sexual abuse material" ("CSAM").1
The investigation will examine whether X failed to comply with its legal duties under the OSA to: assess illegal content risks before launching Grok; prevent "priority" illegal content such as non-consensual intimate images and CSAM from being seen and shared; swiftly remove any illegal content; and protect users' privacy and safeguard children from seeing pornography, including through effective age assurance measures. What the investigation won't address, however, is the use of Grok to create the images, as the relevant offence is not yet in force or included as a "priority offence" within the scope of the OSA. Steps are being taken by the UK Government to address this apparent loophole in the scope of the Act, discussed further below.
Though Ofcom has been urged to move quickly in respect of Grok, there are a number of procedural steps it will need to follow in conducting its investigation, reaching any conclusions regarding breaches of the OSA, allowing X an opportunity to respond, and finally imposing proportionate penalties. Ofcom has the power to impose fines of up to 10% of a company's global revenue and can seek to block access to the service in the UK for serious breaches. The enforcement of any penalties in particular may test Ofcom's impact on US-domiciled entities whose UK activities bring them into scope of the OSA. The Act's extraterritorial reach has been criticised for eroding US free speech protections and is the subject of a legal challenge before the US courts in another case.
Grok's reaction
In light of Ofcom's activity and the general public outcry over the technology and its uses, on 14 January, X took steps to restrict the image-generation function on Grok. The company confirmed it had "implemented technological measures" the effect of which would be to limit Grok's capability to generate some sexualised images. The technological adaptation was announced on the X platform, where a post set out that Grok's functionality no longer included "the editing of images of real people in revealing clothing such as bikinis" or the creation of images containing "real people in bikinis, underwear and similar attire" in jurisdictions where such activity is illegal. This move will do much to take the immediate heat out of the situation, and will mitigate Grok's exposure to OSA breaches going forward. However, Ofcom's investigation into X's compliance in advance of and immediately after Grok's launch will continue, with focus no doubt on why these use limitations were not implemented as part of the initial design.
A global issue
The international fallout from the Grok incident illustrates the cross-border risks of generative AI, including the willingness of regulators to take action against service providers located outside their domestic borders where services create a risk of harm in-country. In India, the government issued a formal notice to X for failing to meet its due diligence obligations and demanded steps to stop the spread of sexually explicit AI content. French prosecutors opened a criminal investigation following complaints about deepfake imagery. Indonesia and Malaysia went further by blocking access to Grok altogether, with Indonesian ministers describing non-consensual AI sexual images as a serious violation of human rights and public safety. Canadian officials also condemned AI-enabled sexual abuse, whilst US lawmakers called for action.
2. Ofcom's expectations for AI-enabled services
When does an AI chatbot fall within the OSA?
Grok's integration into X raises an important question - when does an AI chatbot service become subject to the OSA? The OSA applies to certain categories of online services: "user-to-user services" (platforms that allow users to share content with each other) and search services. In essence, if a platform lets users post or exchange content (text, images, videos, etc) it likely falls under the OSA's scope and must assess illegal content risks and adhere to a range of safety duties. X is a user-to-user service, and the presence of an AI chatbot on the platform does not alter that assessment. In Ofcom's words, "a chatbot that enables users to share text, images or videos generated by the chatbot with other users will be a user-to-user service...whether this chatbot functionality is the main feature or just part of a bigger service such as a social media platform".2 As a chatbot which can produce pornographic or illegal images, integrated into a social network where its outputs can be posted or viewed by others, Grok is clearly in scope.
How should AI-generated content be treated?
More broadly, any AI-generated content that users share on a regulated service is treated as user-generated content under the law. It does not matter whether a piece of illegal or harmful content was created by a human or by an algorithm: if it's on a user-to-user platform and visible to others, the platform has a duty to address it. Ofcom has been clear on this point to dispel any notion that AI outputs fall into a grey area - if users can upload, share, or interact around AI-generated material, those features bring the service within the OSA's scope.
Similarly, AI generated sexualised images of children or intimate images of adults created without their consent, including "undressed" or deepfake imagery, fall within definitions of CSAM and non-consensual intimate images and are treated as priority harms under Ofcom's illegal content regime. It does not matter that an image is synthetic.
Risk mitigation expectations
Ofcom has been clear that AI tools must be governed like any other high-risk feature on a platform. Companies are expected to assess how generative AI could be misused, especially where it can be used to commit or facilitate criminal offences or increase the likelihood that users may encounter illegal content.
That means structured governance, assessing risks and testing models before deployment, and ongoing monitoring once a tool is live. For image generation, Ofcom will expect to see guardrails on prompts and outputs, restrictions on the manipulation of real people's images, strong age assurance for any sexual content, and systems that detect and block illegal material.
Platforms must also provide effective reporting and takedown processes. Victims need clear ways to report AI-generated abuse and to have content removed quickly. Terms of service and user policies should make it explicit that creating or sharing non-consensual intimate images, whether real or AI-generated, is prohibited.
3. Implications and action points for AI deployers
Keep up with legislative changes
The Grok controversy triggered a fast political response. The UK government announced that it would immediately bring into force new criminal offences under the Data (Use and Access) Act that make it illegal to create or request non-consensual intimate images, including AI-generated deepfakes.
These new offences close a gap by criminalising the act of using an AI tool to create a fake sexual image of someone without their consent, even if it is never posted online. Ministers also confirmed plans to criminalise the supply of "nudification" apps and similar tools, expanding legal responsibility upstream to the companies building and providing this technology. These new criminal offences sit alongside and - in respect of image creation - will expand the OSA, giving police and regulators a wider set of powers through which to address harm.
The Grok case shows how high-profile AI failures can drive rapid legislative change. Similar targeted reforms may follow in other high-risk AI areas, from elections to AI-generated disinformation.
Assess criminal risks and related controls
For social media platforms and online services, this case should prompt an immediate review of how AI features are being designed and governed, and an assessment of the criminal risks that may be associated with their use. Image generation tools in particular should be tested against real-world abuse scenarios, and risk assessments updated to reflect the possibility of illegal content being created or shared.
For businesses and public bodies using third-party generative AI tools, there are important contractual and due diligence questions. Organisations need to understand how their suppliers manage safety, how incidents will be handled, and who carries regulatory and reputational risk if something goes wrong.
AI safety enforcement in the UK is no longer theoretical - it is now happening and companies deploying generative AI will be judged not on their intentions, but on how they mitigate the real-world risks their tools can create.
1 https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ofcom-launches-investigation-into-x-over-grok-sexualised-imagery
2 https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/open-letter-to-uk-online-service-providers-regarding-generative-ai-and-chatbots
