Saturday, April 18, 2015

On the Mistakes of AI Supporters and Detractors

This is a response to a post on Google+ by +Singularity 2045. The original post can currently be viewed here. That post (should it no longer be available when you read my response below) was apparently based on a link to this page.

When it comes to arguments for or against "machine intelligence," or "AI," the logic on both sides is frequently lacking.

In the OP Singularity 2045 references a statement of Max Tegmark, which is presented as:

One thing is certain, and that is that the reason we humans have more power on this planet than tigers is not because we have sharper claws than tigers, or stronger muscles. It’s because we’re smarter. So if we create machines that are smarter than us there’s absolutely no guarantee that we’re going to stay in control,

and Singularity 2045 subsequently criticizes it in the following:

Analogies regarding tigers are only valid if tigers had created the human race via intelligent AI engineering of human brains, or AI design of precursor human brains. The point is our intelligent engineering of AI makes humans utterly different to any unintelligent species below us unable to create higher intelligence.

It is easy to see how, regardless of Tegmark's use of analogy, or of whether Tegmark is for or against AI, this statement could well be true:

If we create machines that are smarter than us there’s absolutely no guarantee that we’re going to stay in control.

The thing here is we need to understand what we mean by "artificial intelligence."

If we merely mean really smart machines that are still machines, that is, machines entirely constrained by their programming and by what we command them to do via that programming, then we cannot really "lose control over them" in any meaningful sense.

Things could go wrong with the programming, sure; there could be glitches and bugs, sure. But ultimately, by definition and in virtue of our programming of these machines, we are always going to be "in control" of them, even if that control goes sideways due to human error (glitches and bugs in the programming). Put differently, when a machine (as we currently understand machines) breaks down or malfunctions, our temporary "loss of control" is metaphorical. It is not that we suddenly find ourselves facing a machine that chose to go "out of control"; rather, the degree of our control over the machine has simply reached a low we would rather it had not.

Put differently still, when we "lose control" over a given machine, we do not suddenly think we are living in the world of Maximum Overdrive.

By means of a simple analogy: we can literally "lose control" of our pet dog, for example, but we cannot literally "lose control" of our pet rock.

However, if by "artificial intelligence" we mean the creation of a machine that can also think for itself, which entails that it can choose how to act and respond to its environment, and is unconstrained by its programming--that is, it is free to alter its programming and is not dependent on our commands--then it is entirely possible that we will "lose control" over it. Indeed, by definition we have little to no control over any "free agent," and if we can exercise control over a "free agent," then such an agent is not truly "free."

Now here's an analogy that actually works. We never have full knowledge of other human beings. We think we might "know" them, and we think we might be able to predict their behaviours based on how they have responded in previous interactions under one circumstance or another.

However, people can be unpredictable. They can surprise us. This is because either:

1) They have something known as "free will" and can choose to act or respond differently from how we predict they will, or

2) There is no such thing as "free will," but our universe, in its complexity, is a nonlinear deterministic system, which entails that we cannot always predict results; put differently, a chaotic nonlinear deterministic system is unpredictable in practice, because tiny uncertainties in its initial conditions grow until long-range prediction fails (see the sketch after this list).

(There is, of course, a third option here: the universe is a linear deterministic system. This is highly unlikely, in my opinion, but if it were the case, then no one is "controlling" anything; everything in the universe is merely the playing out of predetermined results from previous causes, which we could, in theory anyway, predict if we could figure out what those initial conditions in fact were. But this is an aside.)
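To make option 2 concrete, here is a minimal sketch in Python (my own illustration, not anything from the original post) using the logistic map, a textbook nonlinear deterministic system. Every step is fully determined by the previous one, yet two runs whose starting points differ by one part in ten billion soon disagree completely:

# The logistic map x_{n+1} = r * x_n * (1 - x_n) is deterministic,
# but with r = 4.0 it is chaotic: nearby trajectories diverge fast.

def logistic_map(x, r=4.0):
    # One fully determined step of the map.
    return r * x * (1 - x)

x_a = 0.2            # one initial condition
x_b = 0.2 + 1e-10    # a second, differing by one part in ten billion

for step in range(1, 61):
    x_a = logistic_map(x_a)
    x_b = logistic_map(x_b)
    if step % 10 == 0:
        print("step %2d: difference = %.6f" % (step, abs(x_a - x_b)))

No randomness enters anywhere; the divergence comes purely from the nonlinearity. That, in miniature, is why determinism alone does not buy predictability.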

Now, if we are unable to always and accurately predict the behaviour of other human beings--whose general biological and psychological framework is similar and familiar to our own--then how much less able will we be to predict the behaviour of a machine intelligence that either has "free will" or is itself able to act, as we do, as a participant in a nonlinear deterministic system?

In other words, we have a difficult enough time trying to control and predict the behaviours of complex systems (human beings) that are reasonably similar and familiar to ourselves, so there is no reason to think that we can have any greater success in controlling or predicting a machine entity that we have never encountered before.

Therefore, the only logical conclusion to the debate over whether a machine intelligence (of the "free" or "nonlinear" variety) will be beneficial to humankind or malevolent is that, at present, the question is undecidable.

Logically speaking, we have no good reason to suppose it is either. Taking a side at this point in time (when we have ZERO experience with such a "machine intelligence") is simply speculative navel-gazing based on our own particular prejudices and biases. We have no actual empirical evidence on which to base our conclusions.

We can guess and speculate all we want, but until such an entity actually exists in the world, there is nothing upon which to ground our guesses and speculations. It is exactly like trying to decide whether alien visitors to the planet would be a boon or a misfortune to humankind: until we actually have alien visitors, there is simply no way to predict the outcome. (This assumes we have none already, and that if we do, their impact on the world has been hidden from us, leaving most of us no way to adequately assess the matter.)
