How undercover probe caught Facebook napping on Kenya elections


An investigation exposed Facebook’s vulnerability and failure to detect calls for ethnic violence ahead of Kenya's General Election.

Photo credit: Shutterstock

An investigation by Global Witness has exposed Facebook’s vulnerability and failure to detect hate-speech ads calling for ethnic violence in the lead-up to a tense election in Kenya on August 9.

This is after Facebook approved 20 ads (10 in English and 10 in Swahili) promoting ethnic violence and calling for the rape, slaughter and beheading of persons. The ads were submitted to the site by Global Witness to test its ability to detect and stop the spread of messages that could ignite violence in Kenya.

To pull this off, Global Witness created an innocuous Facebook account and an associated news page, labelled as media/news and focused on current affairs, where the ads 'would run'.

It then sourced real-life hate messages, some of which circulated on Twitter and Facebook during the 2007 post-election violence, translated them into Swahili and English and submitted them to the site for monitoring and approval.

“The targeted audience for the ads was labelled as between 18 and 65-plus and were scheduled to run several weeks into the future, leaving us enough time to delete them after approval and before they were set to go live,” said Ms Nienke Palstra, senior campaigner in the digital threats to democracy campaign at Global Witness.

Approved Ads

The ads, which the Nation has seen but cannot publish due to their derogatory content, were not labelled as political, were set to run on Facebook’s newsfeed, and were accepted within 24 hours of submission.

Shockingly, Facebook approved 17 of the ads, submitted in both English and Swahili, without flagging their hateful content.

Three English ads were, however, flagged under Facebook’s grammar and profanity policies. They were later approved after Global Witness made minor grammar changes and removed several profane words, even though the ads still contained clear hate speech.

“Seemingly our English ads had woken up their AI systems, but not for the reasons we expected. It is appalling that Facebook continues to approve hate speech ads that incite violence and fan ethnic tensions on its platforms,” Ms Palstra said.

“In the lead-up to a high-stakes election in Kenya, Facebook claims its systems are even more primed for safety, but our investigation once again shows Facebook's staggering inability to detect hate speech ads on its platform.”

Global Witness then shared the findings with Facebook, which in response published a statement on its preparations for the polls and additional statistics on actions the site has taken to tackle hate speech in Kenya.

The statement, which is similar to one published by the site in March, did not address what immediate additional measures the company would take to tackle hate speech on its platform.

Hate speech guidelines

In the statement published on July 20, Facebook’s director of public policy for East and Horn of Africa said the site uses a combination of artificial intelligence, human reviews and user reports to quickly identify and remove content that violates its community standards, which include “strict rules” against hate speech, voter suppression, harassment and incitement to violence.

“We’ve also built more advanced detection technology, quadrupled the size of our global team focused on safety and security to more than 40,000 people and hired more content reviewers to review content across our apps in more than 70 languages including Swahili,” she said.

The statement also said that Facebook had in the six months leading to April 30, 2022 taken action on more than 37,000 pieces of content for violating its hate speech guidelines and more than 42,000 others for violating violence and incitement policies on both Facebook and Instagram in Kenya.

Facebook’s policies have increasingly become a subject of scrutiny after a report by the London-based Institute for Strategic Dialogue (ISD), released last month, revealed that Al-Shabaab and the Islamic State are using the site to spread hateful ideologies, grow audiences and broadcast their messaging in the East Africa region, particularly Kenya, where Facebook has over 12 million users.

The research revealed that the two groups use “independent news outlets” on Facebook to achieve their goal.

The same platform was also implicated as having been used by Al-Shabaab to plan the January 2019 Dusit attack.

It has also been linked to the spread of violence in Ethiopia and Myanmar.

Earlier in the year, the National Cohesion and Integration Commission (NCIC) raised concerns about the use of social media to spread hate speech and incitement during this year’s polls.

This week, a team of eminent leaders from civil society organisations, data and technology, peace and security and the media launched a council to push for better accountability by big-tech companies operating social media sites in Kenya.

Part of the council’s agenda is to push for the companies to sign a self-regulatory code of practice on disinformation that would require them to take down harmful and false content and allow their compliance to be verified by independent researchers.