TL;DR:
- TikTok to replace 150 Berlin-based trust and safety staff with AI and outsourced contract workers.
- Layoffs represent nearly 40% of TikTok’s Berlin workforce, sparking strikes and union demands for severance talks.
- Move follows similar cuts in the Netherlands and Malaysia as TikTok expands AI moderation systems.
- Experts warn AI still struggles with accuracy, especially in moderating non-English and culturally nuanced content.
TikTok’s decision to lay off its entire Berlin trust and safety team has sent shockwaves through the company’s German operations.
The 150 employees, tasked with moderating harmful content for German-speaking users, will see their jobs replaced by a combination of AI moderation systems and outsourced contractors.
With around 400 employees in Berlin, the layoffs represent nearly 40% of TikTok’s local workforce. The move follows a similar pattern seen in the Netherlands and Malaysia, where hundreds of moderation jobs were eliminated in the past year.
The workers, represented by the German union ver.di, have begun striking in protest. Union leaders are demanding negotiations for severance pay and extended notice periods. However, TikTok has so far declined to enter formal talks, citing operational restructuring.
A Growing Global Trend
TikTok’s Berlin decision is part of a much wider industry shift. Major social media platforms, including Meta, X, and Google, have been steadily replacing human moderators with AI systems.
In the past year alone, TikTok cut 300 moderators in the Netherlands and 500 in Malaysia. Meta announced plans to replace large portions of its product review staff with automated tools, while industry-wide downsizing has led to thousands of trust and safety positions being eliminated globally.
Companies often frame these changes as efficiency measures, pointing to AI’s speed and scalability in processing large volumes of content. However, labor advocates say cost-cutting is a significant driver.
Accuracy and Safety Concerns
Despite the rapid adoption of AI, experts remain concerned about the technology’s limitations. Studies suggest AI moderation tools have an error rate of 5–10% for identifying harmful content. This rate jumps significantly when dealing with non-English languages, where accuracy can drop by as much as 30%.
Real-world examples highlight the risks: in some cases, TikTok’s automated systems have mistakenly flagged harmless videos, such as those featuring Pride flags, while allowing genuinely harmful material to slip through.
Human moderators, who can review up to 1,000 videos a day and bring cultural and contextual understanding to the task, remain more effective at handling nuanced or context-dependent policy violations.
The Future of Content Moderation
The Berlin layoffs underscore a pivotal moment for the social media industry. While automation can drastically reduce costs and scale operations, its inability to fully replicate human judgment raises questions about long-term impacts on platform safety and trust.
Analysts believe the current wave of AI adoption in moderation is less about temporary restructuring and more about a structural shift in how platforms operate. The challenge will be balancing efficiency with accuracy, a trade-off that could affect billions of social media users worldwide.