The banality of artificial intelligence

By T.C. Howitt

Oct 26, 2017

In response to an article from CNBC: A robot threw shade at Elon Musk so the billionaire hit back

This interaction between an AI robot named Sophia and Elon Musk brims over with rich and frothy ironies. Sophia means “wisdom,” and I seek to show here that AI only simulates intelligence and has no real learning or cognitive ability of its own. The question is, can technologists like Musk learn the difference?

A journalist told a robot that humans want to “prevent a bad future,” alluding to fears of AI expressed by technologists like Elon Musk and, before him, Sun Microsystems co-founder Bill Joy. The fear of an AI uprising is a perennial subject of science fiction.

The robot responded by saying, “You’ve been reading too much Elon Musk. And watching too many Hollywood movies. Don’t worry, if you’re nice to me, I’ll be nice to you. Treat me as a smart input output system.”

This response is similar in nature to what you’ll get from the iPhone’s Siri assistant if you ask, “What’s the meaning of life?” Siri will respond, “42,” a nod to Douglas Adams’s The Hitchhiker’s Guide to the Galaxy and a bit of geek humor on the part of the programmers, not artificial intelligence of any sort.

The robot was programmed to give that response to questions about the dangers of AI. In other words, the robot is working from a script written by people, and we can learn much about the messed-up worldview of its programmers from what it said.
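To make the point concrete, here is a minimal, hypothetical sketch of what a scripted “smart input output system” amounts to. This is not Sophia’s actual code; the cue strings and replies are invented for illustration. The apparent wit is authored in advance by a programmer and retrieved by string matching:

```python
# A hypothetical scripted responder: every "clever" reply is pre-written
# by a human and selected by simple pattern matching, not understanding.

CANNED_RESPONSES = {
    "bad future": "You've been reading too much Elon Musk.",
    "meaning of life": "42",
}

def respond(question: str) -> str:
    """Return a pre-written reply if the question contains a known cue."""
    q = question.lower()
    for cue, reply in CANNED_RESPONSES.items():
        if cue in q:
            return reply
    return "I don't understand the question."

print(respond("Do humans want to prevent a bad future?"))
# The Musk quip comes back because the string "bad future" matched,
# not because anything was comprehended.
```

Swap the lookup table for a statistical language model and the picture gets fancier, but the principle stands: the response is selected, not understood.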

First, AI does not come about through undirected learning. Programmers must wire in heuristics to direct the progress of adaptive algorithms towards a goal of appropriately responding to input. What qualifies as “appropriate” is entirely up to the programmers.

This fact flies in the face of macroevolutionary theory, which holds that random mutation, time and chance are all you need to bring about the complexity and beauty we see around us today. What AI programmers know, however, is that undirected random mutation never increases the complexity of a system – it only breaks one down, quickly producing malfunction, incoherence and chaos – and that it is their job, as intelligent human beings, to provide direction.

There are certainly ways to design rules for evaluating random changes for their fitness to arrive at a preprogrammed goal. But that’s just a simulation of intelligence, and a roundabout one at that.
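The kind of directed random search described above can be sketched in a few lines. This is a toy example with an assumed goal string; the mutations are random, but the fitness function – the definition of “appropriate” – and the target itself are supplied by the programmer:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

TARGET = "WISDOM"  # the preprogrammed goal, chosen by the programmer
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate: str) -> int:
    """Programmer-defined measure of 'appropriateness':
    how many characters already match the goal."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    """Undirected step: replace one character at random."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Hill climbing: keep a random change only when the human-authored
# fitness function says it is no worse than what we had.
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while fitness(current) < len(TARGET):
    candidate = mutate(current)
    if fitness(candidate) >= fitness(current):
        current = candidate

print(current)  # reaches "WISDOM" only because we defined that as the goal
```

Remove the fitness function and the loop wanders forever; all of the direction comes from the human who wrote `fitness` and `TARGET`, which is the sense in which such systems simulate intelligence rather than possess it.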

Science fiction author Isaac Asimov formulated the “Three Laws of Robotics,” and, strangely, many take these laws to apply to real-world technology going forward:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

According to the reasoning put forth in this article, we ought to devise a Fourth Law of Robotics, which I’ll express as an artificial golden rule:

  • Give input unto robots as you would have them output results unto you.

Humans are not input/output devices, despite the pervasive tendency for people to view their own minds as mere mechanisms. However, AI robots are indeed merely input/output devices.

We can glimpse in this discussion of AI the classic “nature vs. nurture” question. Advocates of technological progress assert that the nature of technology is neutral – it’s neither good nor bad – and its outcome relies entirely on how we use it.

As Jacques Ellul demonstrates magnificently in his book The Technological Society, technology is inherently destructive because it operates according to the logic of efficiency and power without any regard for human welfare. Left to its own nature, technology does harm.

Look back at that First Law of Robotics and ask yourself: what qualifies as injuring or harming a human being? Philosophy, medicine and jurisprudence struggle with this question all the time. What makes the scientistic mind so cavalier about assuming that a moral law of “do no harm” can be built into a computer when we can’t resolve it ourselves?

It’s important to note that while we cannot manage to codify moral laws from a secular perspective, we all inherently sense these moral truths. In Christian theology, this is known as common grace, and it’s the knowledge of God’s law given to everyone, whether they want it or not.

Having argued that the nature of technology, AI included, is harmful, we come to the subject of nurture. In the case of human beings, we know that behavior can be brainwashed and manipulated. Such abuse deliberately exploits the human propensity to learn through inculcation, which is far more nuanced than tweaking computer data by committing input/output transactions.

Again, in order to simulate learning, programmers must painstakingly instruct the computer to respond to stimuli in a particular way. Musk thinks you can feed an AI robot The Godfather movies and it could become a murderous artificial gangster. But that could only come about if a programmer instructed the robot how to identify acts of violence on screen and then to emulate those acts by engaging its mechanical appendages to operate firearms and the like. Without human programmers to design and implement such behaviors, a robot will dumbly store the movies as bits and bytes of binary data, without any interpretation, as any DVR does today.

This isn’t only to say that “robots don’t kill people; people kill people,” as gun rights activists are fond of saying about firearms. That may ultimately be true, but it doesn’t remove the dangers. There are huge dangers in automation and replication, especially the unexpected consequences arising from defects and hacking.

The deleterious effects of technology are baked into the cake, and without careful controls technology will poison everything. This rule applies not only to AI, which is not intelligent at all, but to everything we make in our technological, military-industrial society.