Can Computers Think?

Posted on July 11, 2011

Ever since the creation and technological development of computers, philosophers have asked: can computers think?  Alan Turing, considered the father of modern computer science and one of the main contributors to the development of computers, proposed an answer to the question through an experiment called the Turing Test.  The experiment is set up so that a human being and a computer sit in two separate rooms, and an interviewer does not know which is in which room.  The interviewer's job is to pose questions to each room and to deduce from the responses which answerer is the computer and which is the human being.  There are many theories of intelligence that bear on the question "can computers think?", but the perspective evaluated here, and the one present in Turing's test, is that when "most of us use the term 'intelligence,' we're really talking about behavior" (Kasschau Chapter 12).  Turing holds that if the interviewer cannot tell the difference between the responses of the human being and those of the computer, then the computer has passed the Turing Test; therefore it is intelligent, and for our purposes, it can think.

The Turing Test is the subject of John Searle's article "Minds, Brains, and Programs."  Searle parodies the test with his "Chinese Room" thought experiment.  In this paper I will present and evaluate Searle's argument, examining some of its important details, definitions, and distinctions, as well as the main replies to it.  Lastly, I will present my own view on whether computers can think.

In order to evaluate Searle's argument, one must understand some important distinctions he makes in the article, as well as what he calls formal programs.  First, he distinguishes "weak AI" from "strong AI."  For the most part he has no objections to "weak AI"; the main thesis of his argument is directed against the claims of "strong AI."  Weak AI claims that the computer is a powerful tool for the study of the mind, whereas strong AI holds that it is not only a good tool but that "the appropriately programmed computer really is a mind" (Searle 2).  The latter would mean that a computer with the right programs can be said to understand and to have other cognitive states.  Searle's second main distinction is between syntax and semantics.  Syntax can be understood as the rules by which symbols are manipulated in order to respond to inputs by producing related outputs.  Semantics, then, can be understood as the meaning of the symbols, the interpretation of the relation between the symbols, inputs, and outputs.  Lastly, by a formal program Searle means a set of rules for manipulating symbols that can be programmed into a computer so that it behaves intelligently.  Searle claims in his paper that "…I can have any formal program you like, but I still understand nothing" (Searle 3).  He makes this claim because he thinks that "strong AI" amounts to nothing more than the programming of formal symbols so as to produce behavior that can be called intelligent or thinking.  He wants to argue that computers cannot think because they cannot understand the symbols they are processing.

The Chinese Room thought experiment runs as follows.  Imagine a monolingual English speaker locked in a room.  In that room the speaker is provided with three batches of Chinese symbols and some rules in English.  The first is a large batch of Chinese writing; the second is more Chinese script together with a set of rules in English that correlate the second batch with the first; the third is a batch of Chinese symbols with instructions in English that correlate the third batch with the first two.  The rules instruct the person how to respond to the symbols in the third batch with the appropriate symbols.  Unknown to the person in the locked room, those outside providing these batches call the first batch "the script," the second batch "the story," and the third batch "the questions," while the rules are called "the program."  The essential point of the experiment is that the programmers can become so good at the stories and programs they provide that, to an outside observer, the answers produced by the monolingual English speaker will eventually be indistinguishable from those of a native Chinese speaker.
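To make the point of the thought experiment concrete, here is a minimal sketch, in Python, of what the man in the room is doing.  Everything in it is invented for illustration (the rule table, the particular symbols); these are not Searle's actual examples, only a toy that shows how sensible-looking answers can come out of a program that never interprets a single character.

```python
# A toy illustration of the man in the room: a rule book that pairs one
# uninterpreted string of symbols with another.  The rules and symbols here
# are invented for illustration; they are not Searle's actual examples.

# Hypothetical "program": if you receive these symbols, hand back those symbols.
RULES = {
    "你吃了汉堡吗？": "吃了，很好吃。",
    "故事里有几个人？": "两个人。",
}

def chinese_room(incoming_symbols: str) -> str:
    """Return whatever symbols the rule book pairs with the incoming ones.

    Nothing here translates, parses, or interprets the characters; the
    function only matches one uninterpreted string and returns another.
    """
    return RULES.get(incoming_symbols, "对不起。")  # default symbols for unmatched input

print(chinese_room("你吃了汉堡吗？"))  # looks like a sensible answer from outside the room
```

However good the rule book becomes, the function's relationship to the symbols never changes: in Searle's terms, it remains syntax without semantics.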

Searle presents the Chinese Room thought experiment as a counter-example to "strong AI" because he wants to show that this experiment is exactly what "strong AI" amounts to: through "strong AI" the mind becomes reduced to a program that simply processes symbols without understanding their meaning.  The person inside the locked room has no understanding of Chinese and does not understand what is being output; he is simply following the rules he has been given in order to produce what he has been told to produce.  The consequence is that when the programming becomes good enough, the output seems intelligent.  Searle's main point here is not that no computer or program could pass the Turing Test, but that passing the Turing Test is not sufficient for considering the computer intelligent, because no understanding is going on in this whole process, only a processing of symbols.  He claims that the experiment shows "strong AI" to be syntax without semantics, and that it therefore cannot explain the mind, because our minds have semantics.  That is, we understand what is being input to and output from our minds.  Understanding, according to Searle, has to do with the semantics of the mind, the ability to interpret the syntax being processed in our minds.  He argues that this is lacking in the computer programs that "strong AI" appeals to.

There are two important replies to Searle's argument, the "systems reply" and the "robot reply."  The systems reply suggests that Searle is mistaken in taking the person locked in the room to be the one who is supposed to understand Chinese; the person is just a part of the system, and it is not the person but the whole system that understands Chinese.  The person in the locked room is a part of a whole system in the way the hippocampus is a part of the brain; it is not surprising that the hippocampus by itself understands nothing.  Therefore, Searle's point that the person locked in the room has no understanding is irrelevant, since the person is simply a part of the whole system.  The robot reply suggests that instead of imagining a room with the program, we could put the program into a robot and allow the robot to interact with its environment.  In doing so the robot would acquire a causal connection between the symbols and the things the symbols represent.  This is an attempt to counter Searle's claim that all that is really going on in the room is the writing of a bunch of meaningless squiggles; the robot reply attempts to give meaning to the symbols being written by exposing the robot to its environment.

Searle argues that these replies do not work.  Against the systems reply, Searle states that the man in the room could internalize the necessary rules and keep track of everything in his head, so that the man himself becomes the system.  He argues that if the man does not understand Chinese in this case, then neither does the system, and the fact that the man appears to understand still does not show that he actually understands.  Searle's argument against the robot reply is that regardless of where you put the program, it is still a processing of symbols lacking in understanding; there is only syntax and no semantics.

Searle argues that the mind is not simply a program that processes symbols, as "strong AI" would suggest.  Instead, the brain causes the mind, such that the causal powers of the brain are necessary for intentionality.  This is not present in "strong AI" because intentionality entails understanding, which, according to him, has been clearly shown to be non-existent in "strong AI."  He argues that the study of the brain is extremely important to the study of the mind because the natural makeup of the brain, carbon along with other chemically important elements, has an important effect on the functioning of the brain and therefore on intentionality.  This does not mean, though, that other brains, such as Martian ones, that are not made of the same substance cannot have intentionality.  It simply means that anything else, any machine or brain, would have to have similarly powerful causal powers in order to be considered to have intentionality or understanding.  Searle admits that machines could in principle think, but only machines with causal powers similar to those of the brain.

I appreciate Searle's admission that machines could in principle think, but I think he becomes too entangled in his example using the English and Chinese languages.  I disagree with his reply to the robot criticism, because my own view is a slight alteration of the robot reply.  Our minds are not English-speaking or Spanish-speaking programs that learn other languages through symbols and rules for symbols.  In fact, as children we do not know any language at all; we somehow socially pick up the languages that are frequently spoken to us.  Whatever language the parents speak is not genetically passed down; if it were, American-born Indian children would speak Indian languages just as naturally as children born in India.  The development of language appears to be based on the acceptance of certain social rules of speech that we hear, time and time again since childhood, in the way the people around us speak their respective languages.  Intentionality, then, seems simply to be the application of an assigned name, through certain basic rules in the brain, to certain things, time and time again over a long period of time.

There seems to be a certain basic set of rules programmed into our minds that enables us to learn different languages.  Of course, people do not learn a hundred million languages at once; we learn only the language that is socially applicable to us.  The basic rules our minds are programmed with therefore seem to be rules for processing symbols in our linguistic learning.  As we apply those symbols again and again in a social context, they appear to become something we "know" or "understand," and this gives us the illusion or appearance that there is intentionality behind our language.  For example, a small child, whenever looking at his biological sibling, is told continuously by his parents and those around him that he is looking at his "brother" or "sister."  As time progresses the child begins to "understand" the symbol and begins to call the sibling "brother" or "sister."  In the same way a child is taught in primary school, when he looks at a picture of an apple, to call it an "apple."  We have simply socially attached names to symbols, and the only reason we think we "understand" what a written symbol like "hamburger" refers to is that we have socially attached that name to a particular symbol.  When we see "hamburger" written, we think of the image that McDonalds, Burger King, and various other fast food restaurants have repeatedly ingrained in our minds as being a "hamburger."
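To illustrate, and only illustrate, the picture being sketched here, the repeated pairing of names with things can be caricatured in a few lines of Python.  Everything in the snippet is invented; it is a toy of the essay's suggestion, not a claim about how brains actually work.

```python
# A toy caricature (not a model of the brain) of the idea that "understanding"
# a word amounts to a name repeatedly paired with a thing.  All names and data
# in this example are invented.

from collections import Counter, defaultdict

class Learner:
    def __init__(self):
        # For each thing encountered, count how often each label is heard with it.
        self.pairings = defaultdict(Counter)

    def hear(self, thing: str, label: str) -> None:
        """Record one social exposure: someone names `thing` with `label`."""
        self.pairings[thing][label] += 1

    def name(self, thing: str) -> str:
        """Answer with the label most often paired with `thing` so far."""
        labels = self.pairings[thing]
        return labels.most_common(1)[0][0] if labels else "unknown"

child = Learner()
for _ in range(20):              # parents repeat the name, time and time again
    child.hear("sibling", "brother")
child.hear("sibling", "bro")     # an occasional variant does not win out
print(child.name("sibling"))     # -> "brother": the name has "stuck" through repetition
```

On this picture, the difference between the child and the Chinese Room lies only in the richness and social character of the pairings, which is the point the next paragraph develops.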

In the same sense, what the computer does in applying rules to symbols is the same thing we do, except that we do it in a more socially complex way.  If the same basic rules of functioning that enable us to learn and adapt were programmed into a robot, then it is probable the robot would function the same way human beings do and have the same intentionality.  In order to do this, the basic algorithm functioning in our minds that allows human beings to interact this way would have to be discovered and then programmed into the robot.  This entails that we must study the brain in order to create such a robot, but it does not mean the robot has to be made of any particular Martian or human stuff.  We only need to figure out the algorithm so that a program can be created for the robot that adapts and learns in the same way we do.  Searle argues that a computer can only simulate reality and not duplicate it, offering the example that a computer can simulate rain but not actually duplicate rain; we will not be drenched by the computer's simulation of rain.  I find this argument beside the point, because rain is not a function of the brain or related to the brain in any way.  Things related to the brain include thoughts, desires, and beliefs.  If the basic algorithm functioning in the brain can be discovered, then intentionality can be duplicated in the robotic mind, since there would be no basic algorithmic difference between the robot brain and the human brain.

Even granting that there need be no algorithmic difference between the robot brain and the human brain, an argument often raised against AI is that a computer cannot be creative.  There is already evidence of computers with levels of creativity comparable to humans.  Deep Blue and Deep Junior are two computer programs that have competed well against a human chess champion, Garry Kasparov.  In fact, Deep Blue had been "programmed to use heuristics, the same kind of decision-making processes that humans use when playing chess" and beat Garry Kasparov in 1997 (Ciccarelli and White 271).  And if divergent and convergent thinking are understood to be two parts of creativity, researcher Cynthia Breazeal has already designed a robot named Kismet that "can display several 'moods' on its face as emotional expressions" (Ciccarelli and White 275).  I do think that a computer can in principle think.  Evidence is already accumulating, and time will allow for further technological development to the point where even the most ardent opponents of AI will have to humbly bow; of course, what would be necessary is the ability to at least duplicate, or exceed, the basic mechanism or algorithm in the human brain.

Works Cited

Ciccarelli, Saundra, and J. Noland White. Psychology. 2nd ed. Upper Saddle River: Prentice Hall, 2008. Print.
Kasschau, Richard A. Psychology: Exploring Behavior. Winterpark: (AI)^2, Inc.
Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3.3 (1980): 417-457.
