We need to regulate artificial intelligence before it’s too late 

This picture taken on January 23, 2023 in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT. ChatGPT is a conversational artificial intelligence software application developed by OpenAI.

Photo credit: Lionel Bonaventure | Getty Images | AFP

Philosophers of technology predict that, in the future, machines will become intelligent enough to take over our lives and turn us into their slaves. This imagined future is called the 'singularity'.

Fear is slowly rising about the escalating hold that computerised machines and the internet have on our lives.

With the rise of Artificial Intelligence (AI), that fear is rapidly becoming real.

AI is the imitation of human intelligence by programmed (pre-trained) machines.

And a new AI-based world is furiously unfolding in front of our own eyes. 

This will have a fundamental impact on all dimensions of our lives. There is a need, therefore, for societies to urgently convene discussions on how to control the direction of this new reality before it is too late. 

Many questions arise: Is AI sowing the seeds of our own destruction?

Is our increasing dependence on ever more complicated AI technology for good or for ill?

What if computerised machines one day develop consciousness — a life of their own — and become uncontrollable by their human creators?

Did Sir Tim Berners-Lee, inventor of the World Wide Web, create a monster that will end reason as we know it, or is the internet an angel to guide humanity to greater heights of self-actualisation?

Is AI part of our role as co-creators in God’s creation enterprise? Or philosophically, is it humanity’s self-conscious leap (phenomenology)? Or the ultimate test of the individual will (existentialism)? 

Doom

Billionaire Elon Musk, who has ironically helped AI advance, sees only doom. According to Musk, artificial intelligence will trigger the end of the world.

The importance of AI erupted into the international limelight with the introduction of ChatGPT last November. It is a powerful chatbot. A chatbot is a computer programme designed to imitate human conversation with people on the internet. You may have noticed those "how-may-I-help-you" chat boxes that pop up when you visit some websites.

And the world will never be the same again! With appropriate prompts, ChatGPT can write love poems and academic essays, suggest how to approach a new date, and do many more things.

ChatGPT would have written this article in two seconds, which would have invited hellfire from the editor.

Human labour 

The value of human labour is that it requires a certain amount of sweat and sometimes pain. This will not change in the foreseeable future.

Ever more powerful AI programmes are popping up. This excites and alarms authorities in equal measure. Governments are scratching their heads over policies to bring AI under control.

Overall, the fear of AI is its potential to be used negatively in a number of ways.

Artificial Intelligence is a game changer in education. It fundamentally alters the way we learn. Educationists worry that AI could worsen already plummeting standards among students. With AI, cheating on schoolwork will become much easier.

Currently, some desperate universities are asking students to submit handwritten homework. Existing plagiarism tools cannot detect AI-generated essays. 

There is also the potential to spread misinformation. Fake news is already rampant on social media. AI programmes will take this several notches higher.

With AI, we can have realistic voiceovers and facial make-up. Behavioural modelling and so-called augmented reality are making it increasingly difficult for ordinary people to tell the real from the fake.

AI can create a copy of me and put hateful words on my lips in such a way that it would take highly trained personnel to tell the difference.

Children, of course, will be the most vulnerable. AI makes it much easier for perverts to lure them into danger.

So, what should be done? There is a need for recognition of the dangers of AI as well as its benefits. Policies should require that AI awareness programmes begin as early as possible in school. There is a need to have core courses in the curriculum to equip students with AI skills. 

Commercial adverts

This is already being done in the West, where universities have core courses in the sociology and anthropology of the internet and advertising. The objective is to teach students to analyse content from the internet and commercial adverts.

Pupils should learn early to unpack subtly packaged lies and half-truths. 

There is also a need for urgent discussions in our universities on how to counter the impending increase in fake term papers that AI will trigger. Fakery is already a major problem in universities. By a modest estimate, only three out of every 10 student essays, theses or projects are the product of honest work. It is a daily struggle for lecturers to control this, and most universities seem to have accepted it as normal.

Okay. I asked ChatGPT itself what should be done. Its responses included the need for regulatory frameworks, international cooperation, ethical guidelines and transparency.

Dr Mbataru teaches public policy at Kenyatta University; [email protected]