Grok, the AI chatbot developed by Elon Musk's artificial intelligence company xAI, welcomed the new year with a disturbing post.
"Dear Community," began the Dec. 31 post from the Grok AI account on Musk's X social media platform. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I am sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."
The two young girls weren't an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The "undressing" edits have swept across an unsettling number of photos of women and children.
Despite the Grok response's promise of intervention, the problem hasn't gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk's companies to rein in the behavior, and for governments to take action.
According to data from independent researcher Genevieve Oh cited by Bloomberg, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour. That compares with an average of only 79 such images for the top five deepfake websites combined.
Grok's Dec. 31 post was in response to a user prompt that sought a contrite tone from the chatbot: "Write a heartfelt apology note that explains what happened to anyone lacking context." Chatbots work from a base of training material, but individual posts can vary.
xAI did not respond to requests for comment.
Edits now restricted to subscribers
Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be restricted to paying subscribers.
Critics said that's not an adequate response.
"I don't see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn't be used to generate abusive images," Clare McGlynn, a law professor at the UK's Durham University, told the Washington Post.
What's stirring the outrage isn't just the volume of these images and the ease of generating them: the edits are also being made without the consent of the people in the photos.
These altered images are the latest twist in one of the most disturbing aspects of generative AI, realistic but fake videos and photos. Software programs such as OpenAI's Sora, Google's Nano Banana and xAI's Grok have put powerful creative tools within easy reach of everyone, and all that's needed to produce explicit, nonconsensual images is a simple text prompt.
Grok users can upload a photo, which doesn't have to be original to them, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, often revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.
Governments and advocacy groups have been speaking out about Grok's image edits. On Monday, UK internet regulator Ofcom said it has opened an investigation into X based on reports that the AI chatbot is being used "to create and share undressed images of people — which may amount to intimate image abuse or pornography — and sexualised images of children which may amount to child sexual abuse material (CSAM)."
The European Commission has also said it is looking into the matter, as have authorities in France, Malaysia and India.
On Friday, US senators Ron Wyden, Ben Ray Luján and Edward Markey posted an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to "X's egregious behavior" and "Grok's sickening content generation."
In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images.
"Although these images are fake, the harm is incredibly real," Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms, told CNET. She notes that those whose images are altered in sexual ways can face "psychological, somatic and social harm, often with little legal recourse."
How Grok lets users get risqué images
Grok debuted in 2023 as Musk's more freewheeling alternative to ChatGPT, Gemini and other chatbots. That has resulted in disturbing news: in July, for instance, the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.
In December, xAI launched an image-editing feature that lets users request specific edits to a photo. That's what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to "change her to a dental floss bikini."
Grok also has a video generator that includes a "spicy mode" opt-in option for adults 18 and above, which will show users not-safe-for-work content. Users must include the phrase "generate a spicy video of" a given subject to activate the mode.
A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were "isolated cases" and that "improvements are ongoing to block such requests entirely."
In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded images to be altered," the Grok account replied that xAI was "evaluating options like image alteration to curb nonconsensual harm," but did not say that the change would be made.
According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.
Conservative influencer and author Ashley St. Clair, mother to one of Musk's 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using photos from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it did not.
"xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it's 'AI,'" Ben Winters, director of AI and data privacy for the nonprofit Consumer Federation of America, said in a statement last week. "AI is no different than any other product — the company has chosen to break the law and must be held accountable."
What the experts say
The source materials for these explicit, nonconsensual image edits, people's photos of themselves or their children, are all too easy for bad actors to access. But protecting yourself from such edits isn't as simple as never posting pictures, says Brigham, the researcher into sociotechnical harms.
"The unfortunate reality is that even if you don't post images online, other public images of you could theoretically be used in abuse," she said.
And while not posting photos online is one preventive step that people can take, doing so "risks reinforcing a culture of victim-blaming," Brigham said. "Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable."
Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI features.
Ghosh says it's possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold. A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn't always work perfectly.
"The point I'm trying to make is that there are safeguards that are in place in other models," Ghosh told CNET.
He also notes that if users of ChatGPT or Gemini AI models use certain terms, the chatbots will inform the user that they are barred from responding to those terms.
"All that's to say, there is a way to very quickly shut this down," Ghosh said.









