TikTok Replaces Human Moderators with AI

TikTok is cutting hundreds of content moderation jobs in the UK as the company switches to artificial intelligence systems. The social media platform’s London office will stop handling moderation and quality assurance work; those roles will move to other European offices and outside contractors.


The affected workers are part of TikTok’s trust and safety teams. They’re among the company’s 2,500 employees in the UK. TikTok has held town-hall meetings with staff to discuss the changes. The company says over 85% of content that breaks its rules is already removed by automated technology.

This restructuring comes as the UK’s Online Safety Act starts taking effect. The new law requires social media companies to protect children and remove illegal content quickly. Companies that don’t comply could face fines of up to £18 million or 10% of their global revenue, whichever is greater. Platforms must also add age checks for content that could harm young users. Over 468,000 signatures have been collected on a petition calling for the Act’s repeal, reflecting public concern about its broad reach.

TikTok says AI helps it meet these tough requirements. According to the company, the automated systems can process 20 times more content than human moderators and have reduced workers’ exposure to disturbing content by 60%, meaning less psychological stress for the remaining staff. The changes are part of a broader global restructuring that also affects moderator jobs in Malaysia and other parts of Southeast Asia.

But critics worry about the downsides. The Communication Workers Union says AI systems aren’t ready to replace humans completely, pointing out that machines can’t understand context the way people do. AI might wrongly remove acceptable content or miss harmful posts that need human judgment. Recent reports indicate that data breaches have affected 77% of companies implementing AI systems, raising additional security concerns.

TikTok isn’t alone in making this shift. Social media companies worldwide are turning to AI moderation to save money and work faster. The content moderation market is expected to grow by 10.7% each year through 2027.

Some experts warn that relying too much on AI could hurt user safety. They say it might damage trust in social media platforms, especially among British users who care about these issues. The UK government continues to watch how companies handle content moderation and data privacy.

While AI brings speed and efficiency, the loss of human oversight raises questions about whether machines can truly keep online spaces safe.
