TikTok Replaces Human Moderators

TikTok is cutting hundreds of content moderation jobs in the UK as the company switches to artificial intelligence systems. The social media platform's London office will no longer handle moderation and quality assurance work; those roles will move to other European offices and outside contractors.

TikTok eliminates UK moderation roles as AI takes over content review, shifting remaining work to European offices and external contractors.

The affected workers are part of TikTok's trust and safety teams. They're among the company's 2,500 employees in the UK. TikTok has held town-hall meetings with staff to discuss the changes. The company says over 85% of content that breaks its rules is already removed by automated technology.

This restructuring comes as the UK's Online Safety Act starts taking effect. The new law requires social media companies to protect children and remove illegal content quickly. Companies that don't comply could face fines of £18 million or 10% of their worldwide revenue, whichever is greater. Platforms must also add age checks for content that could harm young users. Over 468,000 signatures have been collected on a petition calling for the Act's repeal, showing public concerns about its broad reach.
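The penalty cap described above amounts to a simple rule: the maximum fine is the greater of the fixed £18 million figure or 10% of worldwide revenue. A minimal sketch of that calculation, with illustrative revenue figures rather than real company data:

```python
def max_osa_fine(worldwide_revenue_gbp: float) -> float:
    """Return the maximum fine under the Act's stated cap:
    £18 million or 10% of worldwide revenue, whichever is greater."""
    FIXED_CAP_GBP = 18_000_000  # the fixed £18 million figure
    return max(FIXED_CAP_GBP, 0.10 * worldwide_revenue_gbp)

# A platform with £50m in revenue: 10% is £5m, so the £18m figure applies.
print(max_osa_fine(50_000_000))
# A platform with £1bn in revenue: 10% is £100m, exceeding the fixed cap.
print(max_osa_fine(1_000_000_000))
```

For large platforms like TikTok, the revenue-based figure is the binding one, which is why the law is described as carrying multibillion-pound exposure for the biggest companies.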

TikTok says AI helps meet these tough requirements. The automated systems can process 20 times more content than human moderators. They’ve also reduced workers’ exposure to disturbing content by 60%. This means less psychological stress for the remaining staff. The changes are part of a broader global restructuring that also affects moderator jobs in Malaysia and other parts of Southeast Asia.

But critics worry about the downsides. The Communication Workers Union says AI systems aren’t ready to replace humans completely. They point out that machines can’t understand context the way people do. AI might wrongly remove acceptable content or miss harmful posts that need human judgment. Recent reports indicate that data breaches have affected 77% of companies implementing AI systems, raising additional security concerns.

TikTok isn’t alone in making this shift. Social media companies worldwide are turning to AI moderation to save money and work faster. The content moderation market is expected to grow by 10.7% each year through 2027.

Some experts warn that relying too much on AI could hurt user safety. They say it might damage trust in social media platforms, especially among British users who care about these issues. The UK government continues to watch how companies handle content moderation and data privacy.

While AI brings speed and efficiency, the loss of human oversight raises questions about whether machines can truly keep online spaces safe.
