Moderators: the unsung heroes in the quest for safe online content

An estimated 34 million videos are posted on TikTok every day, while 350 million photographs are uploaded to Facebook daily. Who makes sure they are safe to view? The online space is a limitless sea of material.

Making sure that what is posted is not harmful to people who see it is a nightmare of a job. Facebook and TikTok are only two of numerous online platforms that allow people to post content. It is the job of content moderators to create a safe space for users and ensure all user-generated content posted is legally compliant and culturally appropriate.

Although AI is getting better at parsing meaning from language, automated content moderation misses nuances in language and social context – and offenders continue to find new ways to cheat artificial intelligence every day.

There are many instances where it is not immediately clear whether content is harmful, such as when someone uses slang words that are not yet prohibited. Online platforms are available to people around the world, which means not all harmful or illegal content will be posted in English.

Human content moderators are a small army of online warriors who speak a variety of languages. They review and monitor user-generated content, looking out for messages and graphics that are harmful, illegal, spammy or otherwise inappropriate, and catch what AI filters miss.

I was one such moderator, working out of the Facebook content moderation office in Nairobi until I lost my job in a mass firing in March 2023. We are contesting the legality of that mass firing through the courts.

My colleagues and I had the job of making sure that user-generated content on the platform did not pose harm to users. We sifted through the words, graphics, images and videos in real time as they were posted to Facebook, hunting for obscene, illegal, inappropriate or harmful material.

Content moderators filter material for specific harmful language, like racial, ethnic or sexist slurs, as well as portrayals of violence and pornography. They may have to respond to comments or close out conversations, then report violations to the authorities.

It is time-consuming and backbreaking work, requiring many irregular hours. Moderators must be thorough to catch violations hidden behind subtle nuances of language and context.

We had to see the violence, sexually explicit material, pornography, blood and gore before anyone else, then make a judgment call on whether or not it ought to go up. It was our responsibility to stand between this torrent of horror and you and your loved ones.

The graphic nature of the material has a traumatic effect on the individuals who act as the gate valve at the end of this constant stream. I was no exception. The role takes a toll on those who perform it, not unlike the king's food taster suffering the poison meant for his master. Hazards such as post-traumatic stress disorder are common. Greater attention needs to go towards providing a toolkit of survival skills to mitigate the harm content moderators suffer in the course of our unseen work.

Without appropriate psychological support, these unsung frontline warriors, racing against time to sieve the good from the bad in an effort to ensure a safer and better society, can end up becoming the walking wounded. Most urgent, however, is the need to fairly remunerate workers operating in a technological no-man's land, whose industry and effort cross physical national boundaries as we know them and have therefore not been fully protected by existing legal regimes.

Without content moderators, platforms designed for anything from online children's education, such as during the Covid-19 lockdowns, to social platforms like Facebook and TikTok would turn into a free-for-all of bullying, horrific violence and harassment, making them unsafe for people to use.

If the Internet were to allow hate speech, bullying, and harassment, people would not feel safe using it. Content moderators keep us all safe online, so everyone can feel comfortable and confident while using online platforms.

This is why societies should support deliberate policy and legislative initiatives that minimise harm to content moderators, guarantee fair remuneration for their work and offer protection from the hazards of this essential labour.


- Mr Nkunzi is a former Facebook content moderator and the chairperson of the organising committee of the African Content Moderators Union; [email protected]