Artificial Intelligence - Can It Exist?
Artificial Intelligence (or AI as it will be referred to from now on) is
becoming a hot philosophical issue with the advent of very powerful computing
technologies. Even in pop culture, especially in the science fiction realm, it
has always been, and continues to be, a popular subject. But what exactly is AI?
Succinctly, it is an intelligence that is traditionally thought to be
non-biological, and it is usually in the form of some type of computer or robot.
It can also be considered a subset of Artificial Life (A-life), because there is
some intelligence associated, however small, with all life forms. Usually,
however, people think of AI as being similar to human intelligence. In my view, AI will never go that far, because there is something special about the human brain and mind. This is not to say that AI will never reach any level of intelligence, or has not reached some level already. It is also possible that computer AI will be totally unlike human intelligence. If it can solve complex mathematical or logical problems, it could still be considered intelligent. Thinking in general implies decision-making, learning, reasoning, memory, language understanding, conscious sensations, beliefs, desires, hopes, fears, creativity, and emotions. With AI, however, I believe that a subset of that list, or even a single item from it, would be enough for AI to be a reality.
Digital computers have been around for only 50 or so years. In that short period of time, computers have come a long way in terms of power and ubiquity. Alan Turing was involved near the beginning of the digital computing
revolution. Turing's Test, his proposed method of testing for AI, is probably the best known idea in the field. The test centers on the question of whether a computer can imitate the thinking processes of people. It involves three entities: a human interrogator, and two subjects, one being another human and the other a machine/computer, who are then interrogated. The
test is to see if the interrogator can accurately tell who the real person is
and who is the machine in a limited period of time (Turing suggests five minutes
in his essay[1]). The entities are separated and only allowed to communicate by
using a terminal where typed words appear on the interrogator's screen. It
should be noted that Turing does not think people should be asking whether or not computers can actually think, but rather whether a machine can imitate thought so well that people come to believe it is actually thinking.
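To make the setup concrete, here is a minimal sketch of that protocol in Python. The interrogator object, its ask and guess_machine methods, and the question budget are stand-ins of my own invention, not anything specified in Turing's paper:

    import random

    def imitation_game(human_reply, machine_reply, interrogator, n_questions=10):
        """Minimal sketch of Turing's imitation game. human_reply and
        machine_reply map a typed question to a typed answer; communication
        is text only, as Turing required."""
        # Randomly hide the two entities behind the anonymous labels A and B.
        if random.random() < 0.5:
            players, machine_label = {"A": human_reply, "B": machine_reply}, "B"
        else:
            players, machine_label = {"A": machine_reply, "B": human_reply}, "A"

        transcript = []
        for _ in range(n_questions):  # stands in for Turing's time limit
            question = interrogator.ask(transcript)
            answers = {label: reply(question) for label, reply in players.items()}
            transcript.append((question, answers))

        # The machine "passes" this round if the interrogator guesses wrong.
        return interrogator.guess_machine(transcript) != machine_label

Nothing in this harness inspects how the machine produces its answers; only the outward text matters, which is precisely Turing's point.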
This idea of imitation of thought does not lend itself well to the claim of AI being on par with human intelligence. By its very meaning, imitation is not the real thing. The outward appearance of the output given by such an imitating computer may look like real thinking, but in reality it may not be. At present, placing restrictions on the Turing Test does little to capture all of the ideas associated with thinking, and no computer has yet been able to pass an unrestricted Test. This is why the notion of strong, human-like AI is improbable, to say the very least. One example to show this is chess. Chess has
traditionally been thought of as a thinking person's game. However, with the
defeat of Kasparov by Deep Blue in May of 1997,[2] the question of computer
intelligence reached the foreground once again. This defeat of a human by a
computer is nothing new, though. Computers have been able to beat human players at checkers and other games of similar or lesser complexity (such as Tic-Tac-Toe) for quite a few decades now. Should the game of chess be considered
special when compared to checkers, only on the basis of it having more
complexity? When inspected more closely, Deep Blue is basically doing the same thing it would do if it were playing a game of checkers. It is looking ahead many more steps than is humanly possible, based on the current state of the chess board, and then making a decision based on the expected value of the moves it makes. Even
with chess programs on personal computers, this is the basic method employed,
although to a much lesser degree because of the personal computer's relatively limited power. If following a programmed series of rules implies some degree of
intelligence, then Deep Blue could be considered to have some essence of AI.
However, this idea can be brought down to a simpler level: a basic calculator would have AI because it follows rules in order to reach a mathematical solution based on what its inputs were. This could be abstracted further to a mechanical device that does arithmetic. All of these things, from the mechanical math device to Deep Blue, use in one way or another the same method of reaching their answers - rule-based algorithms - but they don't show human-like intelligence. They show their own brand of intelligence based on the properties of computers.
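To illustrate what this rule-based look-ahead amounts to, here is a minimal sketch of minimax search in Python. Deep Blue's real search was far more elaborate (pruning, special-purpose hardware), so this is only the general technique; the game interface with legal_moves, apply_move, and evaluate is an assumption of mine, not a description of Deep Blue's code:

    def minimax(state, depth, maximizing, game):
        """Look ahead `depth` moves and pick the move with the best
        expected value according to a fixed evaluation rule."""
        moves = game.legal_moves(state)
        if depth == 0 or not moves:
            return game.evaluate(state), None  # leaf: just score the board
        best_move = None
        if maximizing:
            best_score = float("-inf")
            for move in moves:
                score, _ = minimax(game.apply_move(state, move), depth - 1, False, game)
                if score > best_score:
                    best_score, best_move = score, move
        else:
            best_score = float("inf")
            for move in moves:
                score, _ = minimax(game.apply_move(state, move), depth - 1, True, game)
                if score < best_score:
                    best_score, best_move = score, move
        return best_score, best_move

The same routine, given a different move generator and evaluation rule, plays checkers or Tic-Tac-Toe, which is why all of these programs belong to one family of rule-based algorithms.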
Does a wealth of information and a set of rules, which uses this information,
imply intelligence? I would have to say yes. But as John Searle argues in his essay,[3] there is no intention behind all this information processing. What intention means here is an understanding by the computer of what it is actually doing
and why. But does even that mean that there is no intelligence in the machine
whatsoever? No. Emulation of thinking behavior may have enough functionality for us to call it real AI. There must then be another realm of AI that doesn't
require the use of human-like intelligence for AI to be present. There is a
different path that AI can take. That other path is with alien AI, as opposed to
human AI (i.e., approximating human-like intelligence). Alien AI is where the computer scientist or engineer has free rein over the approach he/she wishes to take in order to bring about AI. Take again the example of Deep Blue. The computer is able to beat the best human player in the world at chess by exploiting its inherent advantage of being able to look further ahead than a human. It may be crude and not the way a human plays chess, but it still works.
One other approach to alien AI is to build very simple machines that can display
simple intelligence. One doesn't have to begin designing intelligent machines
that are too complex to handle. One of the people pioneering this field of
research is Mark Tilden and his company BEAM Robotics[4]
(http://sst.lanl.gov/robot/). These robots are fairly simple creations. BEAM
Robotics' philosophy is to start with a simple design and "evolve" it over time.
The surprising aspect about these robots is that they contain no
microprocessors. In the microprocessor's place is a patented amalgamation of
transistors. They act as a simple neural net or, to use the phrase coined by Tilden, a
"nervous net". This is a radical departure from traditional robot design because
of this apparent lack of a brain. One would think them grossly unintelligent.
This turns out to be quite the contrary. They have been shown to display
lifelike behavior on the order of insects (physically, they also look like
mechanical insects) in searching for energy for their solar cells and defending the area where the sunlight is. They even display learning and memory traits.
Brosl Hasslacher explained in the Reader's Digest article that, "according to the theory of chaotic systems, the lifelike adaptive behavior of his robots was an 'emergent property' of the processes within the circuitry of the machines." Even with these simple machines there is some level of intrinsic
intelligence that is shown. Thus, in their own way, they have AI.
Expert systems also show intelligent behavior. An example could be a medical expert system for determining blood disorders. They know of a specialized world that contains only the information necessary to a particular domain. Rules based on this information allow the system to diagnose a particular disorder and to explain which properties of the input led it to that diagnosis. They can also store in memory what conditions in the past have led to right and wrong determinations, and thus, when presented again with the same input, are able to avoid making the same mistake twice.
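A minimal sketch of such a rule-based diagnosis loop in Python might look like the following; the findings, rules, and diagnoses are invented for illustration and are not taken from any real medical system:

    # Each rule pairs a set of required findings with a diagnosis.
    # The rules here are invented purely for illustration.
    RULES = [
        ({"fatigue", "pale_skin", "low_hemoglobin"}, "possible anemia"),
        ({"easy_bruising", "low_platelets"}, "possible thrombocytopenia"),
    ]

    # Memory of past cases: input findings -> whether the diagnosis was right.
    case_memory = {}

    def diagnose(findings):
        """Fire the first rule whose conditions are all present in the input."""
        key = frozenset(findings)
        past = case_memory.get(key)
        if past is not None and not past["correct"]:
            return "refer to a specialist"  # avoid repeating a known mistake
        for conditions, diagnosis in RULES:
            if conditions <= key:  # every required finding is present
                return diagnosis
        return "no rule matched"

    def record_outcome(findings, diagnosis, correct):
        """Remember whether a past determination turned out right or wrong."""
        case_memory[frozenset(findings)] = {"diagnosis": diagnosis, "correct": correct}

The "intelligence" here is nothing more than set containment plus a lookup table of past mistakes; the behavior looks informed, but the mechanism is plain rules.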
Personification of the machine also comes into play with imitating intelligent
behavior. Even the psychotherapist program Eliza[5] has shown that people attach more aspects of thought to the program than were actually there. This
may have something to do with our pop culture. Science fiction has for a long
time shown people that there can be intelligent machines, whether they are good,
bad, or indifferent. Even when the first digital computers were introduced into society, the term "electronic brain" was coined for them. With this
idea fairly entrenched in our culture, it is very understandable why this
personification occurs. It also carries an implied meaning: if people think that a machine shows some level of intelligence, why not attribute other intelligent aspects to it as well? This layman's logic can also be seen in computer games, where the player tries to avoid or attack enemies in the virtual world of the game. The
player feels that the enemies "know" where he is and how to go about hindering
the player's progress. This knowledge possessed by the computer game is fairly
simple, but it has a striking effect on the player. Personification of machines
takes many forms. A computer could be said to have decided to do something because it wanted or needed to (decision-making), to have remembered some attributes from the past and learned to recognize similar attributes in another entity (memory, learning), or to have been able to understand spoken language and print out what was being said (language comprehension). Needless to say, the words we place upon these behaviors carry some emotional or conscious attachment. However, it is this emotional attachment that causes some people difficulty in accepting AI, because some would argue that those words describe only our thinking actions as human beings. The counter-argument to this is that we personify complex emotions in animals as well, when in reality they may not have them, so attaching those same feelings to a machine is no different.
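The mechanism behind Eliza-style conversation is famously shallow. A minimal sketch of the general keyword-and-template technique in Python (these patterns are my own illustrations, not Weizenbaum's actual script) shows how little is really there:

    import re

    # Keyword patterns paired with reply templates; purely illustrative.
    PATTERNS = [
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
        (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
    ]

    def respond(user_input):
        """Reflect the user's own words back with the first matching template."""
        for pattern, template in PATTERNS:
            match = pattern.search(user_input)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default when no keyword matches

    print(respond("I feel nervous about machines"))
    # -> Tell me more about feeling nervous about machines.

The real Eliza also swapped pronouns ("my" becoming "your"), but even with that refinement the program has no understanding of the words it echoes back.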
This essay has so far discussed what would normally be deemed the more "logical" parts of intelligence, such as problem solving and decision making. However, the question for accepting AI is whether these logical aspects of intelligence can be totally separate from the more non-logical characteristics, such as emotions, hopes, desires, and consciousness, while the computer is still considered to have some intelligence. I believe they can. Just because someone or
something “knows” that 2+2=4, doesn’t imply that they desire that it be true, or
love it or hate it for being so.
Another question that arises is whether computers or machines can be made or programmed in such a way that they can experience these non-logical aspects of intelligence. This is where it gets really difficult to find a satisfactory answer. Arguing from the standpoint of imitation, one could say this is possible: with a large enough database, and conditions that, when satisfied, trigger an appropriate response, the result would appear acceptable to people. This question also implies that we have or will have a handle on
“what makes us tick”. I would say that by the sheer complexity of the human
brain/mind (mind being the definition from a dualist position, or if viewed from
a non-dualist position, mind meaning the psychological make-up of people), that
we may never totally understand our inner workings. A thing can be programmed to
say it feels sad at seeing something die, but does it necessarily follow that it
really feels the pain of death in the same way we do? I think not. One future
possibility might be that we could approximate very closely the “emotions” that
a lower biological life form has. A less complex animal knows when it is in
harm’s way and knows to avoid it. That could be akin to a mobile robot avoiding
a lava pit because it knows that it could not continue to function if it fell
into the pit. This also brings up responsibilities for teaching or programming a
computer or robot the proper moral values. Pop culture has exposed the possible problems of creating an AI that has no morality associated with it, which then becomes a killing machine because it has deemed human life expendable or useless.
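A minimal sketch of such condition-triggered, emotion-like behavior in Python, with sensor names and thresholds invented for illustration:

    # Invented sensor names and thresholds; a sketch of condition-triggered
    # "fear": the robot retreats from anything that threatens its functioning.
    DANGER_TEMPERATURE = 80.0  # degrees C at which components would fail

    def choose_action(sensors):
        """Map sensor readings to an action through simple survival rules."""
        if sensors["ground_temperature"] > DANGER_TEMPERATURE:
            return "retreat"       # the "fear" response: avoid the lava pit
        if sensors["battery_level"] < 0.2:
            return "seek_charger"  # the "hunger" response
        return "explore"           # nothing threatening: carry on

    print(choose_action({"ground_temperature": 95.0, "battery_level": 0.9}))
    # -> retreat

Whether such a rule deserves the word "fear" is exactly the question raised above; the robot avoids harm, but nothing suggests it feels anything.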
When all is said and done, AI is a reality and machines can think in their own
way, but it is based on the truly logical areas that define intelligent
behavior. With the ever-increasing speed at which computer technology is
advancing, we shall see a marked improvement in these areas. However, as for
the more non-logical domains, they will in my opinion be impossible for us to
truly implement in reality, but with computers’ imitative powers increasing, we
may have the next best thing.
[1] A. M. Turing, “Computing Machinery and Intelligence,” Mind, vol. 59, no. 236 (1950), pp. 433-436, 442; rpt. in Philosophical Alternatives, ed. Ken Warmbrōd, Department of Philosophy, University of Manitoba (1997), p. 193.
[2] IBM, Kasparov vs. Deep Blue: The Rematch, 1997 (accessed March 29, 1998).
[3] John Searle, “Minds, Brains and Programs,” Behavioral and Brain Sciences, vol. 3 (1980), pp. 417-424; rpt. in Philosophical Alternatives, ed. Ken Warmbrōd, Department of Philosophy, University of Manitoba (1997), pp. 212-223.
[4] Bennett Schaub, “Amazing Robots of the ‘Mad Scientist’,” Equinox (November/December ’96); rpt. in Reader’s Digest, April 1998, pp. 57-61.
[5] David Tanaka, “From the Editor,” The Computer Paper, Prairie Edition, March ’98, p. 6.