TikTok’s Union Busting Controversy: A Deep Dive into the Dismissals of UK Moderators
TikTok, the popular social media platform with approximately 30 million monthly users in the UK, is grappling with serious allegations of unfair labor practices following the dismissal of around 400 content moderators in London. The controversy erupted just a week before these workers were set to vote on forming a union, prompting the affected employees to accuse the company of “oppressive and intimidating” union busting.
What Sparked the Dismissals?
The dismissal of these moderators occurred in late 2025, coinciding with a restructuring effort at TikTok aimed at streamlining operations. The moderators, who are essential to maintaining the platform’s content standards, sought to establish a collective bargaining unit. Their goal was to secure better working conditions and safeguard against the mental and emotional strain linked to reviewing extreme and violent content. This initiative highlights the importance of workplace representation, especially in high-pressure roles.
Allegations of Union Busting
The accusations against TikTok are serious. Moderators claim that the mass layoffs, coming just before the union vote, amount to clear violations of trade union laws. John Chadfield of the Communication Workers Union (CWU) has voiced concerns about the oppressive work environment moderators face, describing it as the “most dangerous job on the internet.” Moderators are tasked with reviewing distressing content, including graphic violence and child exploitation, at a rapid pace. Chadfield highlights the need for adequate resources and support to ensure their safety and mental health.
The Legal Battle Ahead
A legal claim has now been filed with an employment tribunal on behalf of three former moderators, seeking to hold TikTok accountable for its conduct during the dismissal process. TikTok has firmly denied the claims, labeling them “baseless.” The company attributes the layoffs to a much larger, global restructure, aiming to adapt to the increasing use of AI technologies that assist in moderating content. According to TikTok, 91% of rule-breaking content is now handled automatically, suggesting that the need for human moderators is diminishing.
TikTok’s Statement on Restructuring
A TikTok spokesperson explained that the changes made are part of a global reorganization intended to enhance user safety through technological advancements. However, this move raises eyebrows, especially as it involves eliminating key safety roles while simultaneously citing enhancements in AI. Critics argue that cutting back on human moderators may create new risks instead of alleviating them, especially for a platform that caters to a young audience.
The Human Cost of Content Moderation
Amid these structural changes, the human cost cannot be overstated. Rosa Curling of the tech justice nonprofit Foxglove characterizes TikTok’s actions as “appalling.” Curling argues that by terminating essential safety workers, TikTok not only jeopardizes user safety but also undermines the wellbeing of those tasked with keeping the platform secure. This raises crucial ethical questions about the priorities of tech giants when it comes to balancing profit and safety.
AI: A Double-Edged Sword?
While TikTok claims that its increased reliance on AI has resulted in a 76% reduction in moderators’ exposure to graphic content in the last year, this reliance could be seen as a double-edged sword. Critics assert that while AI can manage content at scale, the nuances and complexities involved in human judgement remain irreplaceable. This is particularly true when it comes to the subtleties of content that may not be easily categorized or understood by algorithms.
Standing Up to Big Tech
Legal experts like Michael Newman from the law firm Leigh Day believe that this ongoing dispute serves as a vital reminder of the power of collective action. As these moderators come together to fight for their rights, the case underscores the notion that individuals can stand up against the overwhelming influence of big tech companies. It raises essential conversations about the implications of cost-saving measures, especially when they have significant impacts on workplace safety and mental health.
The situation at TikTok isn’t just a labor dispute; it’s a window into the broader implications of technology and its intersection with human rights at work. As these moderators seek justice, the world watches closely to see how both TikTok and the legal system will respond to their claims. The outcome may redefine how technology companies address labor rights and ethics in an increasingly automated world.

