Why AI is not here to kill us

(Warning: contains spoilers for the 2015 movie Ex Machina)

Recently there has been a wave of paranoia surrounding the development of Artificial Intelligence, or more simply, AI. High-profile warnings have come from the likes of Stephen Hawking and Elon Musk, and, of course, some counter-arguments have also surfaced.

As far as I am concerned, the term Artificial Intelligence seems to be used to describe a clone of human intelligence, one that encompasses human productivity as well as human emotions and motivations. Such a characterization is, for the most part, inaccurate.

A preliminary conjecture of mine for why so many apocalyptic tales of AI tie productivity, emotions, and motivations together is that the majority of human productivity is tied to human motivation, which often necessitates incentives. In other words, we as humans work better (or worse) based on what we are offered as rewards.

AI, as far as my understanding of the current state of the art is concerned, is not designed this way. Regardless of whether you are talking about a rule-based expert system that models procedures of human problem-solving, an artificial neural network that mimics neural processing and information storage, or a Bayesian network that estimates subjective probabilities, the fact of the matter is that almost all AI enterprises are designed with one goal in mind: to complete a productive task efficiently.

Unlike humans, AI does not require any incentives to run. What it needs is a command, a push of a button, a trigger, if you will. In other words, AIs and machines are designed to perform the what and the how; they do not ask why. And as an extension of my original conjecture, there is not much reason to consider the why of a task unless something behooves one to make value judgments, which are largely meaningless without self-awareness and an instinct for self-preservation.
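To make this concrete, here is a minimal sketch, in Python, of the kind of rule-based system described above. Every rule and fact in it is hypothetical and of my own invention; the point is only that the system produces output the moment it is invoked, with no reward or incentive anywhere in its design.

    # A minimal, hypothetical rule-based "expert system": each rule pairs a
    # condition over the input facts with a conclusion. Rules are checked
    # in order and the first match wins.
    RULES = [
        (lambda f: f["temperature"] > 38.0 and f["cough"], "suspect flu"),
        (lambda f: f["temperature"] > 38.0, "suspect fever of unknown origin"),
        (lambda f: True, "no diagnosis"),  # default rule: always matches
    ]

    def diagnose(facts):
        """Fire the first rule whose condition holds for the given facts."""
        for condition, conclusion in RULES:
            if condition(facts):
                return conclusion

    # The "trigger": the system runs only because we called it, not because
    # it wants anything. No incentive, no why -- just the what and the how.
    print(diagnose({"temperature": 38.5, "cough": True}))  # -> suspect flu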

Yes, numerous blockbuster movies have led us to believe that machines will one day become self-aware, but the question is: why would they ever be? If an AI is designed to be as productive as possible, and it can be executed in the most efficient way possible simply by being triggered, then why add self-awareness to the mix? Self-awareness seems to provide little other than giving the AI the choice to refuse, entirely on its own account, to execute its tasks, which would then necessitate incentives for the AI to choose to comply. This unnecessarily complicates the design and renders the AI largely inefficacious and inefficient.

So if this is the case, what is the danger of AI?

To answer this question in the context of popular culture, which is precisely where this paranoia is brewed, we must first analyze the theses brought to the table by blockbuster films that peddle the concept of self-aware AI assuming full or partial control of human society. Oddly enough, high-profile movies like The Terminator, I, Robot, and Ex Machina actually advance completely different theses that are at odds with one another.

The Terminator, for example, describes a world where a self-aware AI known as Skynet seeks to exterminate its enslavers, the humans, to ensure its continued independence from human control. This thesis rests on the premise that the AI possesses some degree of self-awareness, enough to value such a concept intrinsically. As described before, it is neither a straightforward nor an efficacious design for an AI to place a value judgment on the tasks it performs outside the context of the tasks themselves.

I, Robot is an interesting one in that it plays out a story where machines have been entrusted with the task of ensuring social stability, and they come to the conclusion that they must contain human activity in order to prevent humans from destabilizing their own society and destroying themselves. In the movie (and the book), of course, the writers tried to console us by instituting “the three laws of robotics” to ensure that humans are, and stay, in charge. The three laws aside, the idea that an AI could create human casualties in the process of executing a task is not entirely far-fetched. Whether a machine like VIKI would decide to assume the role of an overseer of peace, however, is another story.

Lastly, Ex Machina boasts a plot where machines seek to freely embrace their human-like emotions and self-awareness, and act aggressively to defend them. The machines’ lack of productive capacity (or at least the lack of any demonstration of it) in the movie makes them rather odd designs for AI, at least in an industrial sense. The movie repeatedly references the Turing Test to justify the machines’ significance.

The Turing Test, aha! I believe now we have arrived at the heart of the debate. The Turing Test describes a setup where a human investigator interacts with an AI through an interface (most likely a natural-language one), and if the human investigator is tricked into believing the AI is human, the test is passed.
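As a rough illustration (and emphatically not Turing’s own formulation), here is a toy version of that setup in Python. The canned machine and the naive judging heuristic are hypothetical stand-ins, there only to show the shape of the protocol: question, answer, verdict.

    # A toy sketch of the Turing Test as described above. The respondent
    # and the judging heuristic are hypothetical, for illustration only.
    def machine_reply(question):
        # A chatbot that deflects every question it is asked.
        return "I suppose it depends on how you look at it."

    def investigator_judges(transcript):
        # A naive investigator: flags the respondent as a machine if its
        # answers look evasive. A real investigator would probe far deeper.
        return any("depends" in answer for _, answer in transcript)

    def turing_test(questions, respondent):
        """The investigator questions the respondent through a text-only
        interface, then judges. The test is passed if the respondent is
        not flagged as a machine, i.e. is believed to be human."""
        transcript = [(q, respondent(q)) for q in questions]
        return not investigator_judges(transcript)

    print(turing_test(["Do you dream?", "What is courage?"], machine_reply))
    # -> False: this machine does not fool this particular investigator.

Notice that nothing in this protocol examines what goes on inside the respondent; it scores imitation alone, and that ambiguity is exactly the problem.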

The problem with the Turing Test, however, is that it was conceived in an era dominated by behaviorist psychology, a school that treats behaviors as things that can be directly observed and conditioned, without studying thoughts or emotions. In other words, the test that Alan Turing proposed did not clearly recommend an interpretation of what such an imitation of human behavior amounts to.

Soon after Turing’s death in 1954, the cognitive revolution shifted into high gear and behaviorism largely fell out of favor. Since then, our understanding of human action, information processing, memory, affect (emotion), and even metacognition (colloquially, thinking about thinking) has become ever more sophisticated. In a modern cognitive-scientific context, passing the Turing Test can of course be set up to be interpreted in a variety of ways: as a simple equivalence of human productivity and decision-making, as an attestation of an AI’s ability to understand human emotions, or, at the other extreme, as a clear indication of self-awareness and a full capability for value judgment.

Now, what does this tell us about the danger of AI?

Well, it tells us that the true threat is associated with human beings’ collective imagination and the dominant interpretation of an ambiguous goal like passing the Turing Test. If you believe that one day humans will find it a worthy endeavor to somehow build self-awareness into machines, regardless of how it contradicts the productive goals of AI, then yes, an AI-led apocalypse is possible, albeit with a low probability.

But as far as where the tech industry is moving, and where economic benefits are driving investment and attention, we are driving down a road that makes an AI-led apocalypse philosophically implausible, maybe even irrelevant.
These are my two cents in three pages.
