TikTok cuts threaten hundreds of UK content moderator jobs amid AI shift

Hundreds of UK jobs are at risk after TikTok confirmed plans to restructure its content moderation operations and shift work to other parts of Europe.

The social media giant, which has more than a billion users worldwide, said the move is part of a global reorganisation of its Trust and Safety division and reflects its growing reliance on artificial intelligence (AI) for moderating content.

A TikTok spokesperson said: “We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally.”

The Communication Workers Union (CWU) condemned the decision, accusing TikTok of “putting corporate greed over the safety of workers and the public”.

John Chadfield, CWU National Officer for Tech, said: “TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives.”

He added that the announcement comes “just as the company’s workers are about to vote on having their union recognised”.

TikTok defended the cuts, arguing the changes would improve “effectiveness and speed” while reducing the amount of distressing content human reviewers are exposed to. The company said 85 per cent of rule-breaking posts are already removed automatically by AI systems.

Affected staff in London’s Trust and Safety team – alongside hundreds more across Asia – will be allowed to apply for other roles within TikTok and will be given priority if they meet the minimum requirements.

The restructuring comes as the UK tightens oversight of social media platforms. The Online Safety Act, which came into force in July, imposes stricter requirements on tech companies to protect users and verify age, with fines of up to 10 per cent of global turnover for non-compliance.

TikTok has introduced new parental controls, including the ability to block specific accounts and monitor older teenagers’ privacy settings. But the firm continues to face criticism over child safety and data practices. In March, the UK’s data watchdog launched a “major investigation” into the platform.

TikTok said its recommender systems operate under “strict and comprehensive measures that protect the privacy and safety of teens”.

The cuts highlight the growing tension between efficiency and safety in the moderation of online content. While AI allows platforms to process huge volumes of posts at scale, critics argue that human oversight remains essential to capture context, nuance and emerging harms.

For TikTok, the gamble comes at a sensitive time. With regulators intensifying scrutiny and unions organising inside the company, the decision to reduce human moderation risks reigniting questions about whether technology alone can keep users safe.
