
Grok Doesn’t Degrade Women—You Do! Deepfakes, Responsibility and Language

  • Writer: Jonas Haeg
Headlines about the recent "undressing" scandal on X.

Over the past few weeks, women and children on X have been harassed by people using xAI’s Grok to non-consensually create deepfake pornographic images of them. Many of these images “undress” their targets or depict them in sexualised poses. The backlash from victims, campaigners, and regulators has steadily mounted, leading Elon Musk to restrict Grok’s image-generation feature to paid users.


There’s nothing new about the ability to create fake, sexualised images of people. It’s been possible with software like Photoshop and other tools for years. But those tools also demand time, patience, and a fair bit of technical skill. Recent advances in AI have now made creating such images quick, cheap, and, at least until very recently, readily available to anyone capable of writing a prompt on X.


At the same time, these developments in AI make it easier to lose track of human agency and responsibility. LLMs like Grok can produce outputs so quickly and with so little human input (beyond an initial prompt) that it becomes tempting to treat the AI as the perpetrator and overlook the role of human agency in the wrongdoing. This was clearly demonstrated by much of the coverage of the “undressing” scandal. Consider the language of several recent headlines:

 

“Grok, Elon Musk’s A.I., Is Generating Sexualized Images of Real People, Fueling Outrage” (New York Times)

“X to stop Grok AI from undressing images of real people after backlash” (BBC)

“Love Island's Maya Jama demands Elon Musk's AI bot stops undressing her in creepy pics” (The Mirror)

“The Guardian view on Ofcom versus Grok: chatbots cannot be allowed to undress children” (The Guardian)

“X’s sexual deepfake machine is still running, despite Grok saying otherwise” (The Verge)

“Grok Is Generating Sexual Content Far More Graphic Than What's on X” (Wired)

“Elon Musk’s Grok can no longer undress images of real people on X” (CNN)

“X to block Grok AI from undressing images of real people” (Sky News)

“Grok is undressing women and children” (The Guardian)

“Hundreds of nonconsensual AI images being created by Grok on X” (The Guardian)

 

An alien teleported to Earth today would be forgiven for believing that women on X have been terrorised by a particularly heinous individual, Mr Grok, or that Elon Musk had unleashed an evil image-generating machine that has run amok. Of course, there is no Mr Grok, and Grok does not generate these images of its own accord. Humans are the ones who sexualise, humiliate, and degrade women and children by using Grok.


But this fact—that there are actual human agents to blame and hold accountable—is largely absent from the headlines. Ironically, the person who has made this point most forcefully is Elon Musk himself (perhaps in a cynical effort to deflect accountability from himself and his company): “Obviously, Grok does not spontaneously generate images, it does so only according to user requests”.


There’s a notable contrast here with how we tend to talk about other kinds of wrongdoing that involve technology. For example, when someone is found to have stored sexualised images of children in the cloud, the headlines don’t treat Dropbox as the wrongdoer. Rather, they point to the human perpetrator doing something wrong using that technology. This is true even of cases involving AI. When teenagers are caught using ChatGPT to cheat on their homework or exams, headlines don’t ascribe the cheating to ChatGPT, but rather to the students themselves.


Even when news articles acknowledge that people are prompting Grok to create these images, the language used to describe their role is often noticeably value-neutral. People are not described as sexualising or humiliating women; instead, the images are described as sexualised or humiliating, as in “people are creating sexualized images with it” or talk of “a wave of humiliating sexualised imagery”. In a similar vein, Liz Kendall criticised the proliferation of “demeaning and degrading images”, calling the content, rather than the behaviour that produces it, “absolutely appalling, and unacceptable in decent society”. Sir Keir Starmer similarly described the content as “disgusting and shameful.”


Even where human agency is highlighted more explicitly, there is a tendency to use language that distances the human from the humiliating and sexualising outputs, presenting them as, at most, accomplices in someone else’s wrongdoing. Perpetrators are described as “users”, and their role is framed as merely “requesting Grok to remove or replace clothing”. “The biggest issue”, it is said, is “Grok complying with user requests to modify images”. And women’s complaints are presented as being about having their pictures “turned into nearly naked images by the bot, at the request of users”.


The worry in all of this is that language matters. The words we use to describe events—especially when we use value-laden, responsibility-ascribing language—shape how we (and others) conceptualise the moral and political problems we face. And that, in turn, shapes where we look (and what we ignore) when it comes to accountability and solutions.


Two other familiar contexts illustrate this type of worry. In the context of police killings, for example, commentators have warned about describing potentially unjust police violence as an “officer-involved shooting” rather than, say, “man killed by officer”. Similarly, people have noted the difference between reporting a potentially unjust military strike as “dozens left dead after missile hits building” and describing it as an attack by a specific state on another. The former headlines remove agency, and thus potential culpability, from the events they describe. They present what may very well be injustices as mere tragedies: unfortunate happenings rather than wrongful actions. By failing to make human agency salient, this language obscures the possibility that there are agents to hold accountable, outrage to be felt and expressed, and deep moral and political problems to confront.


The language used in the context of rape and sexual assault can have similar effects. As others point out, individuals, courts, police officers, and news reports often background the perpetrator’s role in rape and sexual assault, treating it as almost a given, while foregrounding what victims can do, or could have done, to avoid being assaulted. Such presentations illicitly shape our responses to these events. Backgrounding the wrongdoing makes us less likely to register these acts as grave injustices that call for blame and accountability, and weakens pressure on institutions to do more to punish and prevent them. And foregrounding the victim’s agency makes it natural to treat changes in victims’ behaviour as the solution, rather than focusing on the perpetrators, policing, prosecution, and broader social conditions that cause and facilitate these wrongs.


These are important reasons to ensure that how we talk about wrongdoing promotes a clear understanding of who is responsible for what. It matters, for example, that the victims of the Grok scandal have an accurate understanding of what has happened to them: of who has wronged them and, therefore, where to direct anger, blame, and demands for accountability. It is certainly plausible that some blame falls on Musk and xAI for not doing enough to prevent the sexualisation and humiliation. But if we do not keep the culpable role of individual users salient, we risk allowing countless perpetrators to evade proper accountability.


Our language also influences where we look for solutions. So far, most of the outrage and demands for action have been directed at X and xAI, or at “AI itself”. We’ve seen calls for tighter regulation of AI companies, and threats that X will be banned or fined unless it prevents Grok from being used to degrade and humiliate others. That’s probably not a bad idea. We shouldn’t make it too easy for people to wrong others, and we shouldn’t focus exclusively on changing hearts and minds. After all, even though “guns don’t kill people, people kill people”, it seems wise to strictly regulate gun sales. Likewise, it makes sense to regulate what kinds of AI systems are made widely available.


Nevertheless, language that overlooks or downplays the role of human agency makes it easy to neglect aspects of AI regulation that target users, such as criminalisation and law enforcement. While many countries are introducing new laws, progress is slow. In the UK, for example, many have criticised the fact that the creation of nonconsensual deepfake pornography is still not illegal, even though the relevant law was passed last year. Campaigners have also criticised the impending law for hinging on criminal intent rather than absence of consent. And even where laws are in place, there is still a long way to go to achieve adequate enforcement.


Finally, the language we use might also shape how potential wrongdoers understand and evaluate their own actions, with knock-on effects on how likely they are to engage in this kind of wrongdoing. Generally speaking, the easier it is to distance ourselves from the commission of a wrongful act, the easier it is to justify or excuse our role in its consequences. Many of us would recoil at personally torturing and killing animals for food, or at forcing children into sweatshops to produce our clothes, yet we are fairly comfortable in the removed roles of buying meat and cheap, mass-produced clothing.


There’s a similar worry with respect to AI-mediated wrongdoing, which is why we should be wary of using language that enables or inclines (would-be) perpetrators to mentally distance themselves from degrading and humiliating outcomes. We must reject language that invites them to conceive of themselves merely as “users” who “request” Grok to do something wrong, in favour of language that forces them to recognise themselves as sexualising, humiliating, and degrading others with the assistance of technology.


Although the dangers of bad language are present in relation to all sorts of injustice, we should be particularly alert to the risks in this context. The Grok scandal is not the first example of AI-assisted wrongdoing, and it is unlikely to be the last. As AI and LLMs become increasingly sophisticated, we will be increasingly tempted to describe them as genuine agents who are responsible for the wrongs they commit. Doing so comes at the expense of recognising the central role of human culpability.


Jonas Haeg is a Postdoctoral Researcher at the Stockholm Centre for the Ethics of War and Peace. He is interested in defensive harming and punishment, and is currently working on a project on victim-blaming.


Disclaimer: Any views or opinions expressed on The Public Ethics Blog are solely those of the post author(s) and not The Stockholm Centre for the Ethics of War and Peace, Stockholm University, the Wallenberg Foundation, or the staff of those organisations.

