
Can AI cure conspiracy theories?

People believe the truth that they hear first and can rarely be convinced otherwise.

Photo credit: Pool | Nation Media Group

What you need to know:

  • Did you, for instance, believe during the pandemic that people who took the Covid-19 vaccine would die within two years?
  • Or do you think climate change is a façade and that everything happening was bound to happen anyway?


In a world where knowledge is abundant, facts can be overpowered by conspiracies. Science is often the victim: people tend to believe the first version of the truth they hear and can rarely be convinced otherwise.

Did you, for instance, believe during the pandemic that people who took the Covid-19 vaccine would die within two years? Or that climate change is a façade and everything that is happening was bound to happen?

Over the years, conspiracy theories and misinformation have become hard nuts to crack, especially in the era of artificial intelligence.

Scientists are now inching closer to solving the problem. A new study published in the journal Science shows how people can use artificial intelligence, specifically chatbots, to separate fact from fiction using science-based evidence.

“Although much ink has been spilt over the potential for generative AI to supercharge disinformation, our study shows that it can also be part of the solution. Large language models have the potential to counter conspiracies at a massive scale,” said David Rand, a paper co-author and MIT Sloan School of Management professor in a statement.

The scientists asked more than 2,000 participants who believed in at least one conspiracy theory to use their experimental chatbot to interrogate what they held to be true.

The participants then carefully keyed in their beliefs, adding all the supporting evidence they had ever heard.

Researchers from the Massachusetts Institute of Technology (MIT) and Cornell University in the US used OpenAI's most advanced artificial intelligence (AI) model, GPT-4 Turbo, to develop the chatbot.

The chatbot, trained to comb through large volumes of publicly available information, then analysed each belief and presented evidence debunking it.

The study’s lead author, Thomas Costello, an assistant professor of psychology in the US, said in a statement that he was surprised at first, but reading through the conversations made him less skeptical.

“Many conspiracy believers were indeed willing to update their views when presented with compelling counterevidence. The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation -- and was also adept at being amiable and building rapport with the participants,” he explains.

The researchers say that while humans have typically struggled to talk one another out of conspiracy beliefs, the study participants were measurably swayed by a chatbot that presented evidence contrary to what they believed.

The participants spent an average of 8.4 minutes interacting with the chatbot over three rounds of conversation. Afterwards, their belief in their chosen conspiracies had fallen by about 20 per cent on average.

“This effect persisted undiminished for at least 2 months; was consistently observed across a wide range of conspiracy theories, from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the Illuminati, to those about topical events such as Covid-19 and the 2020 US presidential election; and occurred even for participants whose conspiracy beliefs were deeply entrenched and important to their identities,” the study explains.

A commentary article in the same journal explains why debating a conspiracy between humans often fails, with each person keeping their own truth: staunch advocates of a conspiracy can overwhelm those defending the facts with a rapid stream of arguments, a technique known as the Gish gallop.

“For better or worse, AI is set to profoundly change our culture. Although widely criticized as a force multiplier for misinformation, the study demonstrates a potential positive application of generative AI’s persuasive power,” the commentary notes.

“Conspiracy believers often have friends or relatives who are desperate for a way to debunk misinformed beliefs. These connections could be leveraged by encouraging these friends and relatives to coax the believers into engaging in AI dialogue. Friends and relatives themselves could also use AI for inspiration when debating with their conspiracy-believing contacts,” they add.

The study also notes that the chatbot did not reduce belief in conspiracies that were actually true.

“When a professional fact-checker evaluated a sample of 128 claims made by the AI, 99.2 per cent were true, 0.8 per cent were misleading, and none were false. The debunking also spilled over to reduce beliefs in unrelated conspiracies, indicating a general decrease in conspiratorial worldview, and increased intentions to rebut other conspiracy believers,” the study found.

This writer experimented with a common belief, which may or may not be a conspiracy theory, by asking the chatbot to substantiate it.

Among women of reproductive age, it is commonly believed that the longer you live with someone, the likelier your menstrual cycles are to synchronise. The screenshot below shows the prompts and the responses given by the AI chatbot.

The scientists say the findings also highlight the need for thorough follow-up research and appropriate guardrails to ensure this transformative technology is deployed responsibly.