How deepfakes could make Kenyan politics murkier

Publishers also believe artificial intelligence remains key to driving innovation in the media

Photo credit: File

Kenya needs to step up efforts to curb the increasing weaponisation of Artificial Intelligence (AI) on social media platforms; otherwise, sensitive user data could continue to be mined and used to influence voter decisions in the 2022 General Election through deepfakes, tech experts have warned.

Between October last year and March, the trend was witnessed on Facebook, Twitter and WhatsApp, where deepfake videos of Deputy President Dr William Ruto and ODM leader Raila Odinga were shared to shift political narratives.

“If uncontrolled, democracy will be smudged,” warns Prof Bitange Ndemo, chair of the Blockchain and Artificial Intelligence taskforce.

All signs indicate that politicians could escalate the situation by publicly making remarks that incite violence, then hiding behind deepfakes if summoned by security authorities.

“Nothing stops any politician from lying that the videos of him or her insulting opponents were deepfakes,” Prof Ndemo told the Sunday Nation.

While it can be difficult to identify the people behind any deepfake, such videos usually spread like wildfire in developing countries where digital literacy is low, and can heavily sway election outcomes.

Sophisticated deepfakes

In Kenya's previous elections, the Cambridge Analytica scandal saw data analytics tools used to psychologically manipulate the will of the people. Adding sophisticated deepfakes to the mix makes Kenya's political waters even murkier.

Poorly done clips can be easy to identify as fakes: the lip synching is usually poor and the skin tone patchy. Fine details, such as hair, are particularly hard for deepfakes to render well, especially where individual strands are visible at the fringe.

But done subtly, words can be put into the mouths of politicians, complete with a cloned voice and posture.

“If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said or even doing things they never did, seeing will no longer be believing,” states a report published by the Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative.

The technology, which relies on complex machine learning algorithms and neural networks, has been banned from Facebook due to its potential to manipulate users.


“Once a political narrative is shifted, it’s almost impossible to bring it back to its original trajectory. We have to inoculate the public before deepfakes affect elections,” says Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University's Cyber Policy Center.

However, deepfakes are not illegal in many countries, and even Kenya's Data Protection Act of 2019 lacks clauses addressing them, yet producers and distributors can still find themselves on the wrong side of the law.

Infringe copyright

“Depending on what is shared, a deepfake could infringe copyright or breach data privacy regulations. It can also be slanderous if it damages a politician’s reputation,” says Timothy Orideo, chief executive of Nairobi-based data science training hub, Predictive Analytics Lab.

Andrew Bourne, regional manager for Africa at Zoho Corporation, told the Sunday Nation that video deepfakes are not the only worry in Kenya’s political landscape.

He cautions the public and private sectors against downplaying the hazards of algorithmic political weaponry, which could equally damage an election.

Emphasising the urgency of scrutinising sensory surveillance across different software and devices in the country, Mr Bourne forewarns that audio deepfakes, which are easier to create, could take centre stage in political misinformation.

“You never know, the devices you use and the apps in your phone have secret codes that enable them to listen to your calls and analyse the environment you are in. They know your voice, face and can even smell and watch what you are doing. They are smarter than you. Making a clone of you is becoming easier,” he remarked.

He highlights that the digital convenience that most Kenyans enjoy comes at a cost, and new forms of online espionage will potentially extract private data through multi-sensory AI techniques available in devices.

Sensory surveillance is best exemplified by secret sensors embedded in smartphones, smart watches, smart TVs, smart fridges, Virtual Reality gear, cars, microchips and smart headsets, which allow tech companies to collect data about your vision, smell, hearing, tastes and even mood, on behalf of powerful politicians, all without your knowledge.

Copy human traits

AI’s ability to copy human traits with accuracy, Mr Bourne warns, could be used by malicious political opponents to target particular voting groups in their favour.

“Platform owners need to sensitize the public about data privacy, protection and cyber security during every electioneering period,” he advises.

Facebook Inc, whose platforms Facebook, Instagram and WhatsApp count more than three billion users worldwide, said last week that it will give Kenyans more control over the political ads they see in their feeds.

“When Kenyans use this new feature, they’ll no longer see ads that run with a ‘paid for by’ disclaimer. Political ads play an important role in every election. The feedback received from users was that they wanted the option to see fewer of these ads on their Facebook and Instagram feeds,” said Facebook’s Policy Director for Africa Kojo Boakye.

The country’s Data Commissioner Immaculate Kassait said efforts are being made to create digital political awareness among the public.

“The Office of the Data Protection Commissioner has prioritized measures to ensure that the processing of personal data including the use of AI, 5G, Internet of Things and other new technologies is carried out within the law,” she said.

While noting that these technologies present policy, legal and regulatory challenges, Ms Kassait explained that among the measures her office is taking is allowing the public to report data breaches through an online portal.

“We are implementing a framework for data breach complaint resolution and another one for carrying out periodic system audits to ensure compliance.”

But that, according to Prof Ndemo, is not enough to tame the political madness fanned by AI.

“Misuse of AI could create havoc but IEBC can introduce a rule against abuse of technology in general. Cyber security experts need to be involved too. Voters, candidates and the police should be required to familiarise themselves with the dangers of AI,” he said.

To curb the spread of AI-generated fake online content, which could in turn create a zero-trust political society where people cannot distinguish truth from falsehood, Prof Ndemo advises the media to deploy available software to detect and flag such content.

“Many existing AI detection systems are weak but an online public blockchain system that holds an immutable record of videos, pictures and audio could help to detect any manipulations and their origin.”
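Prof Ndemo's proposal amounts to a tamper-evident public ledger of media fingerprints: a newsroom registers the cryptographic hash of a video at publication, and anyone can later check whether a circulating clip matches the original record. A minimal, hypothetical sketch of the idea (standard-library Python only; a real blockchain would replicate the chain across many independent nodes):

```python
import hashlib
import json


def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of a media file's raw bytes; changing even
    one frame of a video produces a completely different digest."""
    return hashlib.sha256(media_bytes).hexdigest()


class MediaLedger:
    """Toy append-only chain: each block commits to the hash of the
    previous block, so altering any recorded fingerprint breaks the
    links of every block that follows it."""

    def __init__(self):
        self.blocks = [{"media_hash": "genesis", "prev": "0" * 64}]

    def _block_hash(self, block: dict) -> str:
        data = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(data).hexdigest()

    def register(self, media_bytes: bytes) -> None:
        # Append a new block pointing back at the current chain tip.
        self.blocks.append({
            "media_hash": fingerprint(media_bytes),
            "prev": self._block_hash(self.blocks[-1]),
        })

    def is_registered(self, media_bytes: bytes) -> bool:
        # A circulating clip is authentic only if its digest was recorded.
        h = fingerprint(media_bytes)
        return any(b["media_hash"] == h for b in self.blocks)

    def verify_chain(self) -> bool:
        # Recompute every back-link; any edit to past blocks fails this.
        return all(
            self.blocks[i]["prev"] == self._block_hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )
```

Under this scheme a doctored deepfake of a registered speech would simply fail the `is_registered` check, and anyone tampering with the ledger itself would be exposed by `verify_chain` — which is the "immutable record" property Prof Ndemo refers to.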