Culture and the thinking, feeling machines that we are building

Artificial Intelligence

Artificial intelligence: research into robot and cyborg development for the future of human living.

Photo credit: Shutterstock

What you need to know:

  • Can our computers and other machines think? Can they feel pain, pleasure or comfort? Can they experience sorrow, joy, anger, love or hate?
  • Can our machines be programmed to share our neuro-physical, emotional, intellectual and spiritual experiences?

Irony is when a robot asks you to prove that you are not a robot. Irony, as you know, is the kind of joke in a situation or an utterance that knowingly or unknowingly contradicts itself. Let us, however, first take a close look at “robots” and their intriguing history.

The term “robot” was introduced into European languages by the Czech playwright Karel Čapek. In 1920, Čapek wrote a play popularly known as “R.U.R.”, or Rossum’s Universal Robots. In the play, an engineering entrepreneur embarks on a project to mass produce machines to perform all those chores that human beings considered too strenuous, unpleasant or tedious for their liking. Things go well for a while, until the robots develop humanlike feelings and tendencies, leading to a confrontation with their makers.

I will not spoil the story for you with details of the plot or its resolution. You will no doubt exercise your reading skills by perusing or browsing through “R.U.R.”. When my undergraduate teachers specified this text as required reading on one of my courses, I read it as a matter of routine, although I ended up enjoying it immensely. In recent times, however, the relevance of Karel Čapek’s play to our own times and circumstances has struck me with particular power, for three main reasons.

First, the strident advocacy of the sciences, with the concomitant dismissal of the arts and humanities, keeps me wondering if we can properly develop on the one foot of “robotic” science while crippling our humanity. Secondly, the phenomenal growth of computer technology and artificial intelligence (AI) poses the real and urgent question of our ability to control our own inventions and ensure our safety and survival. Thirdly, and behind all these musings of mine, is the perennial philosophical question whether we are the only thinking, feeling and caring beings in the universe.

Obviously, we cannot go into all these considerations in one session. I suggest, therefore, that we concentrate only on the second problem, that of our relationship to the mind-boggling technologies that we are developing. We will of course refer to the other two as we go along.

You may have heard that a major ICT company recently sacked one of its employees apparently because he had claimed that the computer at which he was working had become “sentient”. This means that, in his interactions with his computer, he had discovered that it had developed humanlike understanding and feelings. He said that he based his observations on the responses of the computer to the various situations and conversations in which he had engaged it.

We will not go into the rights or wrongs of disciplining a worker for reporting what they had discovered, however unpleasant or inconvenient, in the course of their duties. In any case, few of us can deny the stunning efficiency, versatility and “learning powers” of modern computers and computer-powered machines and their applications. Even more startling are the exponential transformations and capabilities that each new discovery or invention can unleash in the speed, accuracy and proficiency of our machines.

Our ordinary brains and minds are being caught flat-footed and left behind almost on a daily basis by our machines. What was science fiction yesterday, like tours of the galaxies or intimate gazing into the depths of our bodies, is being made an elementary reality in our lifetimes. Is it unrealistic to suppose that the gadgets that enable all these stupendous achievements are capable of effecting developments and changes within themselves?

Can our computers and other machines think? Can they feel pain, pleasure or comfort? Can they experience sorrow, joy, anger, love or hate? Can they evince pity, compassion or shame? In brief, can our machines be programmed to share our neuro-physical, emotional, intellectual and spiritual experiences? The man behind the “sentient” computer claim, I hear, was floored by a profound answer, from his machine, about the soul.

Hard-headed pragmatists and empiricists will probably dismiss all that and say that our computers can only give us back what we have put into them, and according to the commands we give them. In the case of our sentient computer, for example, the empiricist would say that its operator was suffering from what they call the “Eliza effect”. This suggests that the operator had fed the computer with his own thoughts and feelings, and when it gave them back, he mistook them for the computer’s own “thoughts and feelings”.
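The “Eliza effect” takes its name from ELIZA, a program written by Joseph Weizenbaum at MIT in the 1960s, which convinced some users it understood them merely by echoing their own words back with the pronouns swapped. The following is a minimal, illustrative sketch in Python of that reflection trick; the real ELIZA used far richer keyword scripts, and this toy is only meant to show how a machine can appear to “think” while merely mirroring its operator.

```python
# A toy ELIZA-style "reflector": it has no thoughts of its own, it only
# swaps pronouns in the user's sentence and hands it back as a question.
# (Illustrative sketch only; the historical ELIZA was far more elaborate.)

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "your": "my",
}

def reflect(statement: str) -> str:
    """Return the user's own words, pronoun-swapped, as a question."""
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am afraid of my computer."))
# Prints: Why do you say you are afraid of your computer?
```

The “profundity” of the reply comes entirely from the input: feed the machine a thought about fear, and a thought about fear comes back, dressed up as the machine’s own.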

It may be convenient to accept such explanations, but doing so does not remove my fears and concerns. Remember what we said about yesterday’s science fiction being today’s routine reality. Thinking and feeling computers may sound like science fiction today, but they could be some kind of reality tomorrow, or even sooner than that. The solution is not to sack or silence whoever talks about the possibility but to consider realistically how to deal with it for the benefit and survival of our species.

Moreover, we should note that these super-smart computers are all interconnected. That is what the internet is all about. If one computer in that intricate system goes “sentient”, whether by glitch or design, it will inevitably “sensitise” many, if not all, of the others in the system. How can we always remain at least one step ahead of them?

I guess the best approach is for us to refine our humanity as systematically as we can. Even if we accept the “Eliza effect” hypothesis, a positive human input into our computers would not harm us when it is returned to us. But if our human sensibilities are all negative, hateful and destructive, and that is what we feed into our machines, we should not be surprised if what they give back to us is negativity, hate and destruction.

Do you remember the “GIGO” (garbage in, garbage out) concept we learnt in our earliest classes of computer operation? Avoiding it may be a useful step towards dealing with even sentient computers.
Do you, by the way, ever smile at your computer screen?

Prof Bukenya is a leading East African scholar of English and literature. [email protected]