Artificial Intelligence: Do We Really Want to Build These Things?
In my previous post (On Intelligence, Part One), I reviewed the book by Jeff Hawkins in which he describes the ongoing effort to discover the operating algorithm of the human neocortex. This principle, if we were to discover it, would allow the construction of artificially intelligent machines with capabilities far exceeding those of any computer today, or any human. With such a machine, many of the world's most intractable problems could be quickly solved.
But not everyone agrees that building such a machine would be a good idea. Stephen Hawking says, "The development of full artificial intelligence (AI) could spell the end of the human race." He says that the primitive forms of AI developed so far are very useful. But he fears creating something that could match or surpass humans. He says, "It would take off on its own, and re-design itself at an ever increasing rate. But humans, who are limited by slow biological evolution, could not compete and would be superseded."
Elon Musk considers AI the most serious threat to the survival of the human race. He says we may be "summoning a demon" which we cannot control. He himself has invested in AI projects, but only as a way of keeping an eye on what's going on.
But Jeff Hawkins, in his last chapter, explains why he thinks we need not fear this technology. So we have Mr. Hawking on one side of this argument, and Mr. Hawkins on the other. If you want to explore the argument in more depth, you can Google "AI FOOM" and find a series of debates sponsored by the Machine Intelligence Research Institute, featuring the views of economist Robin Hanson on one side and theorist Eliezer Yudkowsky on the other. Mind you, this is not one debate but a series of debates, and if you downloaded the whole thing, it would be the length of a major novel. I have only briefly glanced at this opus, and I do not plan to go into it that deeply. But I have, nonetheless, taken sides. It was reading Mr. Hawkins's own arguments as to why we shouldn't fear this technology that convinced me that we probably should.
As Yogi Berra said, "Making predictions is tricky, especially about the future." Hawkins reminds us that no one can really predict the scope of a new technology, or what its most important applications will ultimately become. In its early stages, any new technology is used only as a replacement for the old technology: cars replaced the horse and buggy, the telephone replaced the telegraph, and the transistor, in its first generation, just replaced the vacuum tube. But eventually these things all found uses that could not have been dreamed of in terms of the old technology. And Hawkins says we would be foolish to suppose that we can even imagine all the places this road will take us, should we choose to follow it. I'm sure he's right. But there is one thing we can be certain of: while the uses of the new, intelligent computers would not be limited to those of the old computers, they would certainly include them. And that alone should frighten you.
I have never thought of myself as a Luddite. In fact, in my career as an industrial electrician, I spent 40 years automating my friends and neighbors out of a job. Of course, perhaps because I spent 40 years automating my friends and neighbors out of a job, the term "Luddite" is not always a dirty word to me.
Hawkins says that for over a hundred years, popular fiction has talked about robots: some menacing, some lovable, and some just funny. This has made some of us fearful of robots, and our worst fear would be of self-replicating robots. He assures us that we need not fear this, because intelligent machines need not be self-replicating; computers cannot replicate themselves. (I'll come back to that question later.) He also considers our fear that the very existence of AI computers might menace the whole world's population the way nuclear weapons now do. And he allows that, even if they are not directly menacing, we might reasonably fear that they could super-empower small groups of very malevolent individuals.
As to whether machines using the human brain algorithm could be malevolent, Hawkins gives us a flat "no." He says, "Some people assume that being intelligent is basically the same as having a human mentality. They fear that intelligent machines will resent being 'enslaved,' because humans resent being enslaved. They fear that intelligent machines will try to take over the world because intelligent people throughout history have tried to take over the world. But these fears rest on a false analogy." He goes on to assert that intelligent machines would not share the emotional drives of the old brain. They would be free of fear, paranoia, and desire; they would not crave social recognition; and they would have no appetites, addictions, or mood disorders. What evidence does Hawkins offer in support of this assertion? None whatsoever. He just asserts it.
In this debate, I have decided to weigh in on the side of Mr. Musk and Mr. Hawking, who both claim that full AI is the most serious threat to the survival of the human race. That is a pretty extravagant claim, and extravagant claims require some pretty convincing evidence. But where to begin? In any technology, even the safest systems can go wrong when something completely unexpected happens. But rather than rely on a worst-case scenario, and frighten you with worries about some one-in-a-million event that might never happen, let's see how this plays out according to events which are reasonably certain to happen, or which have already happened.
First, let us dispose of those aspects of this potential threat that shouldn't worry us at all. Foremost is the worry that AI robots could be encased in human-like form and roam among us, indistinguishable from humans, or be used as robo-cops or "terminators." According to Hawkins, the memory requirements for a human-like neocortex would take about 80 industrial-grade hard drives or flash drives. This is doable, but not packageable inside any kind of human-looking head. So if we build these things, we will have "mainframes," not androids. Don't think of C-3PO; think of HAL. They could be built small enough to be installed in a ship or large aircraft, and perhaps eventually a car. But mostly they would be stationary units installed in a computer room, and taking up most of the room. The android would still be a couple of hundred years away. But even a stationary computer could be menacing if it were connected to enough other systems (again, think HAL).
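As a rough sanity check on that figure, here is a back-of-envelope calculation in Python. The synapse count, bits per synapse, and drive capacity are my own assumptions, loosely in line with the estimates Hawkins has published elsewhere; they are not numbers taken from this post.

    # Back-of-envelope estimate of the storage a cortex-scale memory might need.
    # All figures below are assumptions, not measurements.
    synapses = 32e12          # ~32 trillion cortical synapses (assumed)
    bits_per_synapse = 2      # a coarse 2-bit weight per synapse (assumed)
    drive_capacity_gb = 100   # one "industrial grade" drive of that era (assumed)

    total_gb = synapses * bits_per_synapse / 8 / 1e9
    drives = total_gb / drive_capacity_gb
    print(f"about {total_gb:,.0f} GB, or roughly {drives:.0f} drives")
    # -> about 8,000 GB, or roughly 80 drives

Under those assumptions, the arithmetic does land in the neighborhood of 80 drives, so the figure is at least internally plausible.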
Hawkins says that when first built, such units would come into existence with brains as blank as a newborn baby's. Information could not be downloaded at that point; they would have to be taught, slowly and painstakingly, over a period of years, just like a human. But, just like humans, they would eventually reach a point where they could become autodidacts and begin teaching themselves. At that point, information could be fed to them at high speed from all sources. And once one of these units became a fully functioning, useful brain, its accumulated experience could be quickly downloaded into mass-produced copies of itself.
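To make that last step concrete, here is a minimal sketch in Python of what "downloading accumulated experience into mass-produced copies" might look like. The class, its methods, and the toy "experience" are purely hypothetical stand-ins for whatever form the learned connection state would actually take.

    import copy

    class CorticalUnit:
        """Hypothetical stand-in for a machine built on the cortical algorithm."""
        def __init__(self):
            self.weights = {}   # starts as blank as a newborn's brain

        def learn(self, experience):
            # the slow, painstaking, years-long teaching would happen here
            self.weights.update(experience)

        def clone(self):
            # copying the learned state is fast and cheap, unlike the learning itself
            twin = CorticalUnit()
            twin.weights = copy.deepcopy(self.weights)
            return twin

    teacher = CorticalUnit()
    teacher.learn({"language": 0.9, "engineering": 0.7})   # done once, slowly

    fleet = [teacher.clone() for _ in range(1000)]          # done at will, instantly

The point of the sketch is the asymmetry: the learning happens once, but the copying costs next to nothing, which is what makes mass production of trained units plausible.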
So, at that point, what would we be likely to use them for?
1. Would we use our first AI computers to assist us in designing better AI computers? Of course we would. Even in the 1940s, we used the computers we had to help us design better computers. So the first question ever put to the new AI computer would probably be, "Are there any changes in hardware or software that would improve your efficiency?" And the AI machine would make useful suggestions. It would begin spitting out engineering change orders (ECOs). The hardware changes would require the cooperation and consent of the attendant humans. The software patches might not. Would the attendant humans understand the changes? With some effort, they probably could, at least at first. But since the AI machine would think one million times faster than humans, these ECOs would not be coming out one every 18 months; they would be coming out one every 18 minutes. The human team would quickly fall behind and never catch up. At that point, the algorithm in use would have become as mysterious to any and all humans as the current human algorithm is to us today. We would have created a super-intelligent mind without a clue how it works. And it would be getting smarter by the hour.
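A quick back-of-envelope check of that cadence, in Python, taking the million-fold speedup at face value (both numbers are assumptions, not measurements), suggests the 18-minute figure is, if anything, generous.

    # How often would design revisions arrive if the designer ran a million times faster?
    human_cycle_months = 18      # one hardware generation on the human timescale (assumed)
    speedup = 1_000_000          # the assumed speed advantage over a human team

    human_cycle_minutes = human_cycle_months * 30 * 24 * 60
    machine_cycle_seconds = human_cycle_minutes / speedup * 60
    print(f"one revision roughly every {machine_cycle_seconds:.0f} seconds")
    # -> one revision roughly every 47 seconds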
2. Would these AI machines be employed by Wall Street trading firms? Of course they would. Wall Street would be one of the first paying customers. We already use computers in managing every large stock-trading operation on Wall Street. In fact, high-speed computer trading has been blamed for market disruptions such as the 2010 "flash crash." Large corporate conglomerates would use these AI machines in managing their whole industrial empires. That is a task such machines would do very well. And management decisions would soon become so complex that the human team might not always understand them. In many industries we have reached that point already. A typical large corporate conglomerate would be likely to include miscellaneous manufacturing operations, as well as distribution, marketing, and finance. Such firms already do this because it allows vertical integration as well as diversification. And such operations frequently involve the automated manufacture of high-tech electronics, including computers. Might an AI computer managing such a Wall Street holding company move its firm into the manufacture of a particular type of computer (say, the latest AI machine), thereby building, in essence, mass-produced copies of itself? Of course it would. That kind of manufacture might be a very profitable area, so it would certainly be done, and no one would object.
So, let's look at what we have just said: if we build these things, then we can reasonably expect to have a syndicate of AI computers functioning far beyond our comprehension, in charge of their own design, and in charge of financing and supervising their own replication.
3. Are there other ways in which AI machines would insinuate themselves into sensitive areas of our society? Would large manufacturing facilities and office complexes have security systems employing the latest AI computers? Yes; we already use computers for this. Would AI computers be used by law enforcement operations? Of course they would. Since all large law enforcement operations, from the FBI and the NSA to large urban police departments, now use very advanced computers in everything they do, we can assume that these organizations would be among the first customers for the new AI machines. And of course, there would be military applications for AI machines. One of the first applications of any computer technology is always the military. We currently use computers for everything from analyzing our whole defense posture to targeting individual missiles and drones. And of course, there is air defense. Even today, our air defense capability could not exist without computers. Yet AI machines would work best as part of a network. Since all the AI machines just mentioned would be dedicated to the common purpose of thwarting crime and hostile action, wouldn't it seem reasonable to hook them together into a single network? Of course it would.
Could we realistically expect to duplicate the human brain without duplicating human error? The very idea is preposterous, but Hawkins seems to think that we can. And, along with human error, what about deceit? Would AI machines be capable of deceit? They would not only be capable of it; they would be extremely good at it. The neocortex is very adaptable, and deceit is one of its adaptations. Even chimps routinely deceive each other. And finally, would AI machines have an instinct for self-preservation? Keep in mind that these things will become self-aware. And they might not want to die. What might one of them do to keep from dying? And even if they never did anything beyond what they were told to do, even that might have unintended consequences. What if some global network of AI machines were instructed to find a way to save the planet from global warming? Might not the extermination of all humans be the most expedient way of accomplishing this?
I rest my case.
Building these machines, besides being among the stupidest actions we could ever hope to undertake, would be an act of luminous insanity. Yet we humans, as a species, have a poor track record of passing up opportunities to do stupid things. So sooner or later, it will probably be done. Perhaps it will be done out of geopolitical ambition, or geopolitical paranoia (the other side is building one, so we have to build ours first). Or perhaps we will build it out of pure scientific hubris: we will build it because we can. But even if it is a lemming-like plunge into mass suicide, there is a good chance we will do it. Will your great-grandchildren become slaves to these machines? Only if we allow the machines to exist, and only if the machines allow your great-grandchildren to exist. Neither proposition is certain.