It used to be confined to the darkest corners of the internet. The type of places most of us would never stumble upon, let alone seek out. We wouldn’t even know it was happening. Men – it’s almost always men – requesting faceless, nameless other men to debase women, paying for photographs of them to be digitally stripped down, for their likenesses to be manipulated into degrading sexual positions, or their faces pasted into graphic pornography. Whether they know them personally or through the public eye doesn’t matter; now anyone can undress women with the click of a button.
Just a few weeks ago, many didn’t realise that this underground behaviour would soon feel like a best-case scenario for women online; that this abuse would go mainstream, snowballing into an avalanche that would take with it world leaders, royalty and any woman who spoke out.
Global outrage over Elon Musk’s free AI assistant Grok complying with users’ requests to generate sexually explicit imagery of women (and in some cases, children, as reported by The Internet Watch Foundation) has seen governments crack down on the platform’s lack of safeguards. The X-owned Large Language Model (LLM) has been banned in Malaysia and Indonesia, been given an ultimatum by India, been reported to regulators in France and is being investigated in California. Here in the UK, Ofcom has launched an investigation into whether the social network formerly known as Twitter breached the law. In response to the news, the government announced it would (finally) bring into force a law as part of the Data Act that will make it illegal to create nonconsensual intimate images with AI (currently, it is only illegal to share them), with Technology Secretary Liz Kendall describing such images as “weapons of abuse”.
In response to the backlash, Musk announced on 14 January that Grok will no longer be able to undress images of real people in countries where it is illegal, with the X safety account posting: “We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal.” X also said the platform has “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content”. Ofcom has said it will continue its investigation into “what went wrong and what’s being done to fix it”.
But, far from being an outlier, the Grok controversy is merely proof of what campaigners have been shouting about for some time: that we are already living in a new era of sexual abuse, one in which the lines between reality and falsification are increasingly difficult to determine. Cosmopolitan UK has been reporting on the threat for years, speaking to experts who have long sounded the alarm that the technology was developing too fast, without the safeguards in place to protect women.
While the latest news has exposed the ease and velocity with which such tech can develop, it has also proved the despairingly sluggish bureaucratic pace at which it is curtailed when that tech is put to nefarious use. The question now is, what long-term impact will this have on how women exist online? If we live in fear of speaking out, or even just having pictures of ourselves online, knowing what could be done to our images, what will that do? Are we being digitally silenced?
The lie of inevitability
It was a few days before New Year’s Eve when online safety campaigner and author Jess Davies noticed that women all over X were being targeted by men on the platform. ‘@grok, put her in a bikini made of cling film’, ‘@grok, remove pants and shirt’, ‘@grok, put glue on her face’, ‘bend her over’, ‘make her wear a g-string’. Whatever degrading imagery could be conjured, users were prompting Grok to produce it and it was – largely – complying. That is, of course, within the policies in place (from back when X was still Twitter) that blocked complete nudification requests. See-through bikinis were generated with crude genitals visible underneath, and glue or doughnut glaze was added to images of women as a workaround to portraying semen.
Calling out the platform for digital abuse, Davies ended up under the microscope herself. “They called me a ret*rd, a stupid b*tch,” she tells Cosmopolitan UK. “Users began asking Grok to remove my clothes, put me in a cling film bikini – which Grok did within two minutes of the request – creating an image in which I appeared essentially naked, in a see-through bikini with generated nipples.” The abuse moved to other platforms, with Davies being depicted in increasingly graphic ways. “It all felt very deliberately threatening, a punishment and a humiliation ritual from these men, a sort of ‘how dare you talk about this, you can’t stop us.’”
For Davies, being deepfaked by Grok is sadly another in a long line of digital sexual abuse she’s fallen victim to. But nothing she’s experienced has been quite so easy to generate or so publicly circulated. “We should never have reached the point where an AI chatbot on a public-facing platform like X, with half a billion users and teenagers on there, was allowed to generate these kinds of images. It’s awful,” says Davies. “This kind of AI technology being used to harm women is nothing new. It does feel like we’re trying to put the genie back in the bottle.”
Numerous other women were targeted too. Content analysis from Copyleaks found that, at the end of last year, X users were generating approximately one nonconsensual sexualised image per minute. Nana Nwachukwu, a PhD researcher at Trinity College Dublin, analysed over 500 requests to Grok from the past two months and found 85% to be direct demands to strip women of their clothing. “What stood out for me is that this isn’t technical sophistication, nor in private or protected chatrooms,” she tells Cosmopolitan UK. “These are casual replies to public posts. It seems like perps are emboldened by each other.” Love Island presenter Maya Jama posted a boilerplate message on X, rescinding authorisation for Grok “to take, modify, or edit any photo of mine”. Initially, the chatbot seemed to comply, saying “if anyone asks me to [edit] your content, I’ll decline. Thanks for letting me know”, but users soon found workarounds. Another X user, Evie, 22, who is vocal about women’s rights on the platform, says she was targeted by users who asked Grok to generate fake images of her genitalia. Other women subjected to the dehumanising digital undressing included the Princess of Wales and Sweden’s Deputy Prime Minister.
The graphic images created by Grok of Davies, Jama, Evie and the thousands of other women targeted by users on X all still exist online. And, as with Davies, much worse is likely to exist elsewhere. On 9 January, days before geoblocking the tool, Musk responded to the furore by limiting Grok’s image editing capabilities to paid subscribers, a move that ended the glut of images being generated on timelines but prompted campaigners to ask whether women’s dignity was really being protected, or whether it could once again be destroyed, only now for a price.
In response to the developments of the past fortnight, numerous women left X for fear of being targeted themselves. Charity Women’s Aid, the Commons Women’s and Equalities Committee and several female MPs also left the platform. This is the impact of online misogyny: any time a woman dares speak out about something certain men don’t like, this is the threat they face. And it’s a very real one with seemingly few repercussions. The snail’s pace at which the UK government and regulator responded to a trend that started in late December, when the proverbial horse had very much already bolted, is worrying. If you’re someone seeking to humiliate, belittle and shame women, it’s that heel-dragging response time you’re counting on, a window of opportunity in which to cause as much damage as possible.
When approached for comment by Cosmopolitan UK asking whether X had been profiting from women’s abuse and if the platform has plans to penalise users who have made prompts to Grok to undress and humiliate women, a spokesperson for the social media company repeated the publicly available statement: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
A spokesperson for Ofcom told Cosmopolitan UK: “Ofcom acted immediately in response to the deeply concerning reports of Grok being used to create and share sexual imagery on X […] We’re progressing this investigation as a matter of urgency, while ensuring we follow due process”, citing over 90 platforms the regulator has launched investigations into and an AI nudification site which was issued fines and has since withdrawn from the UK. The regulator also said that “tackling harms against women and girls is of the highest priority for Ofcom.”
But while this may be the case – and Ofcom published its new violence against women and girls (VAWG) measures late last year – the fact remains that these measures, as charity Refuge points out, “are only guidance and are not legally binding”. Additionally, says Clare McGlynn, professor of law at Durham University who specialises in image-based sexual abuse: “The reality of most laws in our country, whether it’s criminal law or something like the Online Safety Act, is they rely on people to follow the law. But what we’ve seen, particularly with a company like X, is a large-scale ‘We don’t like your laws so we are not going to follow them’. That’s what’s different from, say, four years ago.”
“This was such a deluge of abuse and it’s deeply despairing, because it’s the most obvious example of women being silenced online,” McGlynn adds. “As soon as a woman speaks out, she gets harassed and digitally undressed.”
When Cosmopolitan UK spoke to the Prime Minister in December about the very real threat of online harms to women, the response was non-committal. While Starmer was firm that when it comes to harmful content online, “We need to get away from this idea that unless it’s really extreme, it’s fair game,” he also said: “I don’t want to put false promises out there […] but we do need to react more quickly […] we need to be better at that.” This week, the government response was much more robust. Jess Phillips, Minister for Safeguarding and VAWG, described AI-generated images as “no different to those which have been created in real-life” and added, “Tools that create vile, degrading, non-consensual images should never exist.”
The unpaid cost of innovation?
Musk has finally acted, but this fracas, these global responses… this article, this is why Grok was created – its raison d’être. To create as much publicity as possible. When he launched it in 2023, Musk billed the chatbot as the ‘anti-woke’, pro ‘free speech’ alternative to mainstream LLMs like ChatGPT and Google’s Gemini. Grok even introduced ‘spicy mode’, and initial responses to the sexual deepfake scandal from Musk and the press team at X’s parent company, xAI, were to laugh it off or reply to media queries with the brush-off, “legacy media lies”.
Dismissing the images as harmless, silly jokes and the response to them as an overreaction or government ‘censorship’ is also part of the play. Amidst the outrage from many about what had been allowed to happen was a mainstream and unapologetic response that shrugged off the images as ‘just pixels’ or ‘just bikinis’, placing the blame on those complaining about it. The message being that women should expect a certain degree of misogyny as an inevitability of having a digital presence.
But, for victims of such abuse, it is about far more than pixels, explains Davies. “You’re having your consent and your bodily autonomy taken away.” What’s more, the campaigner explains, “For many targeted by nudification technology, it can be a repeat of physical abuse or assault they’ve experienced. You already have the weighted trauma from having your consent taken away before, so then to have a stranger on the internet be able to generate fake images to humiliate you, or even just for their own sexual gratification, is so violating and extremely triggering. It can change the way you exist online.”
Undermining any negative reaction also denies the out-of-context usage of images and gaslights women about the real cost to them of such innovation. “The women behind these images now have to deal with the fact that they exist online forever, and the threat that hangs over them now is that someone might save it, share it in a WhatsApp group, and all of a sudden that looks very realistic to people who aren’t involved in this.” How, for example, might a prospective employer respond to finding explicit images of a job applicant online? What about a family member or a partner? A child?
As we know, while some of Grok’s creations were crude portrayals, the further this tech develops the more difficult it will become to tell what’s real from what’s fake. Then we reach a point where regulator wrist slapping is ultimately pointless: the damage to women has already been done. Campaigners also point to the emotional labour women shoulder following such attacks: the effort to disprove the veracity of content, attempts to have it taken offline and the further abuse when they speak out.
“We keep being told to expect this, but this isn’t the reality for men that exist online. So why do women have to put up with it?” says Davies. “If we can’t post an image of ourselves online without being told we should expect image abuse, or we can’t share our opinion without experiencing sexual harassment... we talk about free speech, but where’s our free speech?” And that’s where we should really be placing our focus. Whether via underground apps, overt misogynistic messiahs, unsolicited sexual imagery or revenge porn, the free speech that is being curtailed is not that of online trolls or tech bosses acting with impunity, it’s women’s. This is an unchecked amplification of daily misogyny and sexist microaggressions being structurally embedded within tech itself. Lawmakers could have seen this coming – they were warned about it – but they chose not to act. And until they catch up, we’ll continue being silenced.
Harriet Hall is an award-winning journalist and the Features Director at Cosmopolitan. Most recently she was awarded Best Feature for her investigation into Andrew Tate and online misogyny at the 2023 Write to End Violence Against Women awards and the BSME for Best Lifestyle Journalist in 2022 for her work covering women’s safety, women’s health, politics and pop culture. As a journalist of over a decade, her work has seen her interview celebrities from Zendaya to Zac Efron and politicians including Jeremy Corbyn (just five days before the 2017 general election); report on fashion weeks and take on stunts in the name of feminism. She has written for a range of publications including The Independent, where she ran the lifestyle desk for four years, Evening Standard, Vogue, BBC News and Stylist. Harriet also regularly appears across numerous platforms to discuss her work, from Sky News to Radio 4’s Woman’s Hour, and on panels such as at the prestigious Women of the World Festival. Her first book, ‘She: A Celebration of 100 Renegade Women’, was published by Headline Home in 2018, and you can find her on X, Instagram and LinkedIn when she isn’t curled up on the sofa with a good book and the smallest dog in the world.