TikTok Replaces Human Moderators

TikTok is cutting hundreds of content moderation jobs in the UK as the company switches to artificial intelligence systems. Moderation and quality assurance work at the social media platform's London office will end, with the roles moving to other European offices and external contractors.


The affected workers are part of TikTok's trust and safety teams. They're among the company's 2,500 employees in the UK. TikTok has held town-hall meetings with staff to discuss the changes. The company says over 85% of content that breaks its rules is already removed by automated technology.

This restructuring comes as the UK's Online Safety Act starts taking effect. The new law requires social media companies to protect children and remove illegal content quickly. Companies that don't follow the rules could face fines of £18 million or 10% of their global revenue, whichever is greater. Platforms must also add age checks for content that could harm young users. Over 468,000 signatures have been collected on a petition calling for the Act's repeal, reflecting public concern about its broad reach.

TikTok says AI helps it meet these tough requirements. The automated systems can process 20 times more content than human moderators, and the company says they have reduced workers' exposure to disturbing content by 60%, easing psychological stress on the remaining staff. The changes are part of a broader global restructuring that also affects moderator jobs in Malaysia and other parts of Southeast Asia.

But critics worry about the downsides. The Communication Workers Union says AI systems aren't ready to replace humans completely, pointing out that machines can't understand context the way people do. AI might wrongly remove acceptable content or miss harmful posts that need human judgment. Recent reports indicate that data breaches have affected 77% of companies implementing AI systems, raising additional security concerns.

TikTok isn’t alone in making this shift. Social media companies worldwide are turning to AI moderation to save money and work faster. The content moderation market is expected to grow by 10.7% each year through 2027.

Some experts warn that relying too much on AI could hurt user safety. They say it might damage trust in social media platforms, especially among British users who care about these issues. The UK government continues to watch how companies handle content moderation and data privacy.

While AI brings speed and efficiency, the loss of human oversight raises questions about whether machines can truly keep online spaces safe.

