Image: UMA Media / Pexels

Are our governments equipped to tackle online misogyny?

A few weeks ago, the House of Commons announced that Grok, X’s AI chatbot, will be banned in the UK following reports that it was used to digitally ‘undress’ women and children when tagged beneath images. According to the BBC, users prompted the tool to generate sexualised versions of photos without consent, including requests for women and girls to appear in bikinis or minimal clothing. What initially appeared to be a disturbing misuse of artificial intelligence has quickly become a political issue, raising urgent questions about privacy, bodily autonomy, and the normalisation of misogyny online.

Ofcom is considering what action to take under the Online Safety Act, which gives regulators the power to block services that fail to protect users from serious harm. Hannah Swirsky, Head of Policy at the Internet Watch Foundation, warned that such technology risks normalising sexual exploitation and exposing children to abuse.

Although new legislation passed in June 2025 makes it illegal to share deepfake images of adults, the law only came into force in mid-January 2026, prompting criticism that the government acts too slowly while technology continues to develop rapidly. Prime Minister Keir Starmer has described these deepfakes as “disgusting and abhorrent”, but Grok’s ban raises a deeper question: what kind of harm is this, and why has it taken so long to be treated seriously?

The harm is not only the manipulated image itself, but the message behind it: that women’s bodies are public property and that consent can be overridden with a prompt

Too often, the controversy surrounding Grok is discussed as a technical failure or an example of irresponsible users pushing a tool too far. This framing misses the wider cultural context in which it occurred. What Grok enabled was not random experimentation, but a form of digital voyeurism: the non-consensual sexualisation of women’s bodies for entertainment. The harm is not only the manipulated image itself, but the message behind it: that women’s bodies are public property and that consent can be overridden with a prompt.

The issue of Grok cannot be separated from the wider growth of the manosphere and its influence on mainstream online culture. Across forums, podcasts, and social media platforms, misogynistic ideas perpetuated by figures such as Andrew Tate about women’s bodies, value, and sexuality are repackaged as self-help advice or “biological truth”. Women are framed as objects to be conquered or exposed, while men are encouraged to see control and humiliation as markers of power. In this context, an AI tool that can digitally undress women is not a technological accident but a cultural product of these beliefs. It reflects an ecosystem in which violating boundaries is acceptable, and entitlement is merely disguised as humour. What is most alarming is how easily these ideas move from fringe spaces into everyday digital life, where the language of ranking, judging, and sexualising women becomes normalised through memes, trends, and algorithms. Grok did not invent these belief systems, but it made it easier to perpetuate them by transforming harmful ideologies into actions at just the click of a button.

Platforms often defend themselves by framing such behaviour as “user misuse”. However, this ignores how design choices shape what users are encouraged to do. X reportedly limited Grok’s undressing function to paying subscribers, raising serious questions about whether sexualised exploitation was being treated as a feature rather than a failure. When platforms profit from engagement and provocation, they have little incentive to dismantle the cultures that drive them. In this sense, online misogyny is not an accident but a structural outcome of platform economics.

The freedom to generate sexualised images of someone without their consent comes directly at the expense of that person’s right to privacy and dignity

This is exactly why Grok’s ban is of great political significance. It challenges the idea that sexualised abuse online can be dismissed as individual behaviour rather than recognised as systemic harm. By intervening, the government has implicitly acknowledged that this is not simply a problem of taste or ethics, but one of bodily autonomy and public safety. Yet the response has also exposed tensions around free speech and regulation. Elon Musk has framed the ban as an attempt to censor expression, but this defence raises a fundamental question: whose freedom is being protected? The freedom to generate sexualised images of someone without their consent comes directly at the expense of that person’s right to privacy and dignity.

The Grok ban also does not exist in isolation. From 2 February, Pornhub will restrict access to its website in the UK so that only users with pre-existing accounts can view its content, following stricter age verification requirements introduced last summer. While intended to protect children from explicit material, these policies have already led to a surge in VPN use and the migration of users to more extreme and less regulated platforms. This raises serious concerns about unintended consequences: whether legislation is genuinely tackling the culture of online misogyny and voyeurism, or simply pushing it into darker corners of the internet.

Without confronting the wider cultures of sexual entitlement, humiliation, and exploitation that flourish online, regulation risks becoming symbolic rather than genuinely transformative

Together, the Grok ban and Pornhub restrictions reveal a government struggling to respond to a rapidly changing digital environment. While they show a willingness to intervene, they also underscore the limitations of reactive policy. Removing one tool does not dismantle the system that facilitated it. Without confronting the wider cultures of sexual entitlement, humiliation, and exploitation that flourish online, regulation risks becoming symbolic rather than genuinely transformative.

Grok’s removal should therefore be read as precedent-setting but not sufficient. It signals that governments are beginning to recognise AI-enabled deepfake abuse as a form of violence rather than mere misinformation. Yet the deeper challenge remains whether online misogyny and voyeurism will be treated as structural problems requiring sustained political action, or as periodic scandals to be managed and then forgotten.

If technology can now undress women with a prompt, then the question facing policymakers is no longer whether innovation has gone too far, but whether society is willing to defend bodily autonomy in digital spaces with the same seriousness as in physical spaces. Until the cultures normalising and promoting digital misogyny are challenged, the next online tool will simply replace Grok, and the cycle will begin again.
