Why Is TikTok Laying Off Its Employees? A Good or a Bad Move?

Recently, social media giant TikTok, owned by ByteDance, has again surfaced in the news over why it is laying off employees.

Currently, ByteDance has more than 100,000 employees across most countries (except India). This time, TikTok has announced layoffs affecting around 400-500 employees in the Malaysian region; as per the reports, no other regions, such as the United States, are affected.

Earlier, in the second quarter of 2024, TikTok laid off around 250 employees in Ireland; this time, the cuts fall on different departments in Malaysia.

The affected TikTok employees were notified of their termination via email on 9th October 2024.

Why Is TikTok Laying Off Its Employees?

The simple reason TikTok is laying off employees is to increase the use of AI for content moderation. TikTok wants to review and remove content that doesn't adhere to its community guidelines and broader social standards, such as inappropriate or illegal material that could harm users.

TikTok is focusing on making its moderation process AI-driven, replacing manual processes with automated systems to cut costs while delivering better results. With the industry increasingly driven by AI, TikTok has to keep pace and cannot afford to lag behind. Automation also promises faster moderation with fewer errors, which should ultimately improve TikTok's bottom line.

TikTok already removes the majority of content that violates its community guidelines.

Another major reason for the decision is that the Malaysian government has been pushing for stricter policies on social media, in line with global demands for better content management and compliance with cybersecurity regulations. In recent years, there has been a rise in phishing, cybercrime and inappropriate content published on TikTok, and the Malaysian government has mandated tighter controls to curb it.


How Can AI Help in Content Moderation?

AI can improve content moderation in several ways over manual processes, offering speed and accuracy that human moderators alone cannot achieve. Here are some key ways AI can help:

1. Real-Time Detection of Harmful Content

AI can scan large volumes of data, including text, images and videos, in real time, with more accuracy and in less time than manual review. Isn't that interesting?

AI can detect and remove harmful content like violence, hate speech, phishing or other cyber-threat content, vulgar images and inappropriate videos. Models are trained, and keep learning, to mark specific keywords, images and patterns as red flags and block them, as the sketch below illustrates.
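To make this concrete, here is a minimal sketch of the rule-based layer of such a system. The patterns and the `flag_text` helper are illustrative inventions, not TikTok's actual moderation rules; real systems combine simple rules like these with trained classifiers.

```python
# A minimal sketch of rule-based flagging, the simplest layer of automated
# moderation. The pattern list is a made-up example, not any platform's policy.
import re

FLAGGED_PATTERNS = [
    r"\bfree\s+crypto\b",        # common phishing bait (example pattern)
    r"\bclick\s+this\s+link\b",  # classic scam call-to-action (example pattern)
]

def flag_text(post: str) -> list[str]:
    """Return the patterns a post matches; an empty list means it looks clean."""
    return [p for p in FLAGGED_PATTERNS if re.search(p, post, re.IGNORECASE)]

if __name__ == "__main__":
    print(flag_text("Click this link to claim free crypto!"))
    # Both patterns match, so this post would be queued for removal or review.
```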

2. Scalability

What happens when there is a surge of bad content? At such times, it is difficult for human moderators to review and remove it before it goes viral and spreads across the globe.

AI can process a huge number of posts, comments, images, videos, and so on, and sanitise them in parallel. This kind of scalability is crucial for large platforms like TikTok, Facebook, or YouTube, where user content is uploaded continuously.
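As a toy illustration of that scalability, the sketch below fans a batch of posts out to a worker pool. The `classify` function is a hypothetical stand-in for a real model call, and the numbers are arbitrary; the point is that throughput grows with workers, not with moderator headcount.

```python
# Sketch: fan a stream of posts out to a worker pool for parallel moderation.
from concurrent.futures import ThreadPoolExecutor

def classify(post: str) -> bool:
    """Stub classifier: returns True if the post should be blocked."""
    return "spam" in post.lower()

posts = [f"post {i}" for i in range(10_000)] + ["buy spam now"]

with ThreadPoolExecutor(max_workers=32) as pool:
    # pool.map preserves input order, so posts and verdicts zip cleanly.
    blocked = [p for p, bad in zip(posts, pool.map(classify, posts)) if bad]

print(f"Scanned {len(posts)} posts, blocked {len(blocked)}")
```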

3. Consistency and Reduced Bias

Human moderators carry natural biases and can drift from the written policy guidelines. AI, which has no personal biases of its own, sticks to the rules it has been given and applies them the same way every time, resulting in more consistent, rule-specific content moderation.

4. Language Processing

TikTok, like other social media platforms such as Facebook or YouTube, has millions of users across the globe who upload content in many different languages. This is challenging for human moderators who don't understand a given language.

But AI with advanced NLP (Natural Language Processing) can moderate content across many languages and dialects, helping platforms moderate globally. With NLP, AI can pick up language-specific slang and emojis, making it more effective at detecting inappropriate content in its various forms.
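A toy sketch of language-aware moderation might look like the following. It assumes the third-party langdetect package (a statistical detector, so results on very short texts are probabilistic), and the per-language banned-word lists are made up for illustration.

```python
# Sketch: detect the post's language first, then apply per-language word lists.
# Requires: pip install langdetect
from langdetect import detect

BANNED = {
    "en": {"scam"},    # illustrative lists, not real policy terms
    "es": {"estafa"},
}

def moderate(post: str) -> bool:
    lang = detect(post)  # e.g. "en", "es", "ms"; probabilistic on short text
    words = {w.strip(".,!?¡¿").lower() for w in post.split()}
    return bool(words & BANNED.get(lang, set()))

print(moderate("This giveaway is a scam!"))      # True
print(moderate("¡Esta oferta es una estafa!"))   # likely True (Spanish)
```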

5. Preventing Disinformation

Human moderators often cannot tell whether a piece of news is real or fake, and mistakes slip through.

AI can help here, too. It can detect patterns associated with misinformation and fake news by comparing claims against reliable, authoritative sources, then flag or remove the misleading content to alert users and reduce the spread of disinformation.
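In its simplest form, that comparison is a lookup against a store of verified claims. The sketch below is deliberately naive: the `VERIFIED` dictionary stands in for the retrieval and natural-language-inference models that production fact-checking systems actually use.

```python
# Naive sketch: compare a claim against a store of verified statements.
VERIFIED = {
    "tiktok is owned by bytedance": True,
    "tiktok has shut down worldwide": False,
}

def check_claim(claim: str) -> str:
    known = VERIFIED.get(claim.strip().lower())
    if known is True:
        return "consistent with trusted sources"
    if known is False:
        return "contradicts trusted sources: flag as misinformation"
    return "unverified: route to human fact-checkers"

print(check_claim("TikTok is owned by ByteDance"))
print(check_claim("TikTok has shut down worldwide"))
```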

6. Assisting Human Moderators

AI can work alongside human moderators: it performs the initial content analysis and filtering, and humans focus on the content that requires a more detailed understanding of context.
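A common way to wire this up is confidence-based triage: the model acts on its own only at the extremes of its confidence range and escalates the ambiguous middle to people. The thresholds below are illustrative, not any platform's real settings.

```python
# Sketch of AI-assisted triage with a human-in-the-loop middle band.
def triage(post: str, score: float) -> str:
    """`score` is the model's probability that the post violates policy."""
    if score >= 0.95:
        return "auto-remove"            # confidently bad
    if score <= 0.05:
        return "auto-approve"           # confidently fine
    return "send to human moderator"    # ambiguous: needs human context

for s in (0.99, 0.50, 0.01):
    print(s, "->", triage("example post", s))
```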

7. Continuous Learning and Adaptation

Humans can suffer from stagnant knowledge: many of us learn something new, settle into a comfort zone, and stop learning further or improving our skills.

But AI keeps learning. Models can be retrained to adapt to new forms of harmful content and behaviour as soon as they appear, staying one step ahead of malicious content publishers.
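The feedback loop behind that continuous learning can be sketched with a small scikit-learn pipeline: human moderator decisions become fresh labels, and the model is periodically refit so new abuse patterns are picked up. The data here is purely illustrative.

```python
# Sketch of the retraining loop: moderator verdicts become new training labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts  = ["win free money now", "lovely dance video",
          "claim your prize", "cooking tutorial"]
labels = [1, 0, 1, 0]  # 1 = violates policy, from human moderator review

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A new abuse pattern is confirmed by moderators: append it and refit.
posts.append("hot new giveaway scam")
labels.append(1)
model.fit(posts, labels)

print(model.predict(["free money giveaway"]))  # likely [1] (flagged)
```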

Instead of removing employees outright, big tech companies could combine the two super forces, humans and AI, to manage content efficiently: AI handles the initial analysis, and humans provide the deeper, context-aware judgment. This would help avoid errors in borderline cases.
