A woman has told the BBC she felt “dehumanised and reduced to a sexual stereotype” after Elon Musk’s AI Grok was used to digitally remove her clothes.
The BBC has seen multiple examples on the social media platform X of people asking the chatbot to undress women to make them appear in bikinis without their consent, as well as placing them in sexual situations.
XAI, the company behind Grok, did not respond to a request for comment, other than with an automatically-generated reply stating “legacy media lies”.
Samantha Smith shared a post on X about her image being altered, which was met with comments from people who had experienced the same – before others asked Grok to generate more images of her.
“Women are not consenting to this,” she said.
“While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”
A Home Office spokesperson said it was legislating to ban nudification tools, and under a new criminal offence, anyone who offered such tech would “face a prison sentence and substantial fines”.
The regulator Ofcom said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI images.
Grok is a free AI assistant – with some paid-for premium features – which responds to X users’ prompts when they tag it in a post.
It is often used to provide a response or more context to other posters’ remarks, but people on X are also able to edit an uploaded image through its AI image-editing feature.
It has been criticised for allowing users to generate images and videos featuring nudity and sexualised content, and it was previously accused of creating a sexually explicit clip of Taylor Swift.
Clare McGlynn, a law professor at Durham University, said X or Grok “could prevent these forms of abuse if they wanted to”, adding they “appear to enjoy impunity”.
“The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators,” she said.
XAI’s own acceptable use policy prohibits “depicting likenesses of people in a pornographic manner”.
In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material” and confirmed this included sexual deepfakes created with AI.
It said platforms such as X were required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content on their platforms, and to take it down quickly once they become aware of it.
Additional reporting by Chris Vallance.