
Football rivalry has always carried a dark undercurrent, but the intersection of artificial intelligence and anonymous social media access has opened a troubling new chapter in how that darkness spreads. Over the weekend, Manchester United and Liverpool successfully pressured X to remove a series of posts generated using Grok, the AI tool developed by Elon Musk’s xAI, after anonymous users manipulated the system to produce content mocking some of the most painful events in both clubs’ histories.
The posts referenced the 1958 Munich air disaster, in which 23 people died, including eight Manchester United players; the Hillsborough disaster of 1989, which claimed the lives of 97 Liverpool supporters; and the recent death of Liverpool forward Diogo Jota. The content was removed following formal complaints from both clubs, but the episode has reignited an urgent debate about the responsibilities of technology companies when their tools are used to amplify hate.
Tragedy chanting moves from the terraces to the algorithm
The behavior that produced those posts has a long and deeply unpleasant history in English football. Tragedy chanting, in which rival clubs are mocked by references to their worst moments, has existed in stadiums for decades, delivered through cruel chants and graffiti directed at fans who lived through genuine loss. What social media has done is strip away the physical context and amplify the reach, allowing the same impulse to spread instantly and anonymously to audiences far beyond any single ground.
Manchester United and Liverpool, as two of English football’s most successful and high-profile clubs, are among the most frequent targets of this behavior. The problem became visible enough that in March 2023 then-managers Erik ten Hag and Jürgen Klopp issued a rare joint statement condemning the practice and calling explicitly for it to end. Their combined appeal carried moral weight but limited practical effect: the chanting continued, and the digital versions of it kept spreading.
Earlier this year Nottingham Forest felt it necessary to warn their own supporters against tragedy chanting ahead of Liverpool’s visit to their ground. A Liverpool fan received a three-year ban from attending matches after being caught chanting about the deaths of two Leeds United supporters. Consequences exist, but they have not proven sufficient to deter the behavior at scale.
What Grok made possible that was harder before
The specific concern raised by the Grok incident is not that AI invented tragedy chanting but that it industrialized it. Where previously a person had to compose and publish offensive content themselves, AI tools can now generate polished, targeted material on demand, lowering the barrier to abuse and allowing anonymous users to produce content at a speed and volume that would otherwise be impossible.
The manipulation of Grok to generate posts about Munich, Hillsborough and Jota’s death demonstrates exactly how that capability can be exploited. The tool did not act independently; users directed it toward specific tragedies with specific intent. But the result was content that carried the appearance of coherence and was immediately shareable, making the harm it could cause proportionally greater than a single handwritten chant on a stadium wall.
Government and legal pressure mounts on tech platforms
The U.K. government has responded to the incident with direct language. Liverpool West Derby MP Ian Byrne described the posts as appalling and completely unacceptable, questioning how content of that nature could be generated through a major commercial platform without safeguards in place to prevent it.
The legal framework is also relevant. The Online Safety Act, which came into force in 2023, categorizes the spread of threatening communications as a criminal offense and places obligations on AI services including chatbots to prevent the dissemination of illegal content, including hate speech and material designed to cause distress. A spokesperson for the Department for Science, Innovation and Technology said the posts went against basic standards of British decency and reiterated that technology companies carry legal responsibilities for what their platforms enable.
The broader question for AI development
The Grok episode raises questions that extend well beyond football. As AI tools become more capable and more widely accessible, the potential for them to be directed toward harmful ends grows alongside their legitimate utility. The anonymity that social media already provided to bad actors is now combined with a content generation capability that removes one of the remaining practical barriers: the effort required to produce the material itself.
For football clubs, governing bodies and fans, the immediate concern is protecting the memory of genuine tragedies from being weaponized as entertainment. For regulators and technology companies, the question is how to build systems robust enough to distinguish between legitimate use and deliberate exploitation and what accountability looks like when those systems fail.

