TikTok is shaking up its operations, and the move could leave hundreds of people in the UK out of a job.
The company, which employs more than 2,500 people in Britain, is restructuring its trust and safety department and shifting more of its work onto artificial intelligence.
The restructuring is part of a wider global plan that doesn’t just affect the UK but also stretches across South and South-East Asia.
While some roles will remain in the UK, many will be shifted to other offices in Europe or outsourced to third-party providers.
London Office Moves Ahead Despite Cuts
Interestingly, the shake-up comes as TikTok continues to grow its physical presence in London.
Its UK headquarters is currently based in Farringdon, but the company is preparing to move into a new office in the Barbican area next year.
A company spokesperson explained that the restructuring builds on changes that began last year, saying the goal is to “strengthen our global operating model for trust and safety” by concentrating resources in fewer locations and making operations faster and more effective.
The Rise of AI in Content Moderation
At the heart of these changes is TikTok’s heavy reliance on AI.
The platform has said that more than 85% of posts that break its rules are now removed automatically, without a human moderator ever seeing them.
According to TikTok, this use of automation also helps reduce the amount of distressing or graphic material that human staff are exposed to.
However, critics argue that while AI is fast, it may not be reliable enough to replace human judgment in moderation.
Union Raises Alarm
The Communication Workers Union has reacted strongly, warning that cutting human moderation teams in favor of AI could put millions of British users at risk.
A spokesperson said workers have repeatedly raised concerns about the dangers of relying too heavily on “immature AI alternatives” and that these issues have been a constant worry during attempts by TikTok employees to unionize.
Renewed Fears After Molly Russell’s Death
This debate is happening against the backdrop of renewed warnings about harmful content on social media.
The Molly Rose Foundation, set up in memory of 14-year-old Molly Russell, who took her own life in 2017 after viewing self-harm content online, recently published fresh research.
The study revealed that TikTok and Instagram are still recommending suicide and self-harm content to teenagers “on an industrial scale.”
It found that accounts set up to look like they belonged to 15-year-old girls were being bombarded with content about depression and self-harm.
Calls for Action from Government
The foundation’s chairman, Ian Russell, who is Molly’s father, described the findings as “staggering,” saying that harmful content similar to what Molly saw before her death remains widespread across social media platforms eight years later.
He called on the Prime Minister to step in and introduce tougher measures to stop what he described as “preventable harm.”
The new report, titled Pervasive-by-design, claims that social media companies are “gaming Ofcom’s new rules” under the Online Safety Act, whose child safety protections came into force in July 2025.
The foundation says little has changed since its previous research in 2023; in fact, in some cases things may have gotten worse.
TikTok Pushes Back
TikTok has strongly rejected the claims.
The company insists it has made major safety improvements and strictly bans any content that promotes suicide or self-harm.
A spokesperson argued that the foundation’s findings don’t reflect the “real experience” of most people on the app, stressing that over 99% of harmful content is proactively taken down before it spreads.