UQ Students should read the Disclaimer & Warning

Note: This page dates from 2005, and is kept for historical purposes.

PHIL1000 – Final Essay – Mind and Body: Computationalism

(I achieved 14 out of 20, average 13.26)

Artificial intelligence seems an increasingly realistic prospect as computers grow exponentially faster every year; but what exactly is “artificial intelligence”, and will we ever see man-made machines capable of genuine understanding? Can a computer think? Is your brain merely a form of computer? John Searle (1932–), a professor of philosophy at the University of California, argues that the very concept of artificial intelligence is fundamentally flawed, and that the human mind is not merely a program running on a computer (in this case, the brain). We will explain and evaluate his famous “Chinese Room” thought experiment, and discuss whether his arguments provide a decisive objection to the concept of “thinking computers”, or, equivalently, to the computational theory of cognition.

First, we must have a clear understanding of the term “artificial intelligence”. Searle distinguishes between “strong” and “weak” or “cautious” AI (artificial intelligence). In weak AI, a computer is seen as nothing more than a powerful computational tool—an unarguable fact nowadays—whereas strong AI holds that an appropriately programmed computer is a mind, complete with its own cognitive states, and the ability to actually understand.

Cognitive terms such as “understand”, “know”, and so on, have come to be commonly applied to computers and various machines. We frequently say, “The video recorder knows when to turn on, because I set its timer last night”, or “The teller machine understands my request for money, and gives me the correct amount”; but when we say this, we don’t really mean that the device in question actually has an understanding, in the sense that it possesses intentionality. A teller machine or a video recorder is merely following a formal procedure; it has no intent and no power to do otherwise. For a computer to be artificially intelligent, it would need to possess its own intentional mental states—the ability to, of itself, understand, to conceive its own ideas and its own responses to problems; this would be a “thinking” computer—artificial intelligence.

Searle provides a decisive objection to the concept of “thinking computers” using his now-famous “Chinese Room” thought experiment. In this Gedankenexperiment, Searle imagines that he himself, as an English speaker who understands no Chinese, is placed within a locked room and given a batch of Chinese symbols. Since he understands no Chinese, these symbols are to him nothing more than meaningless squiggles. He is then given another batch of Chinese symbols, along with English instructions that enable him to correlate one set of symbols with another set. In his original example, Searle has a third set of symbols and another set of instructions, but we can simplify this by omitting them. In essence, Searle is mimicking a computer program: his first batch of Chinese symbols can be seen as the input to a computer program, his second batch of Chinese symbols the output, and his English instructions—the only part that he actually understands—as the program. Using this “program”, Searle can accept Chinese symbols, and return other Chinese symbols. If the English instructions—the program—were written in such a way that the Chinese symbols he returns form correct answers to questions he is given, also via Chinese symbols, then the complete system can successfully accept and answer Chinese questions. It can, of course, also accept English questions and return English answers—as Searle understands English.
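To make the analogy of input, program, and output concrete, here is a minimal, purely illustrative Python sketch of the Chinese Room as a formal symbol-manipulating procedure. The rule table, the particular symbols, and the function name are all invented for illustration; Searle’s own example uses batches of written symbols and English instructions rather than any particular program.

```python
# A purely illustrative sketch of the Chinese Room as a formal symbol-manipulating
# program. The rule book pairs input symbol strings with output symbol strings;
# the "operator" applies the rules without attaching any meaning to the symbols.
# The symbols and rules below are invented for illustration only.

RULE_BOOK = {
    "你好吗": "我很好",        # to the operator, both sides are meaningless squiggles
    "你会说中文吗": "会",
}

def chinese_room(question: str) -> str:
    """Return whatever output symbols the rule book pairs with the input symbols.

    Nothing here "understands" Chinese: the function only matches shapes against
    a table, just as Searle follows his English instructions.
    """
    return RULE_BOOK.get(question, "对不起")  # a stock reply for unknown input

if __name__ == "__main__":
    print(chinese_room("你好吗"))  # prints 我很好, yet no understanding is involved
```

The point of the sketch is simply that the procedure involves nothing beyond matching shapes against a table: correct Chinese “answers” come out even though no part of the program attaches any meaning to them.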

The important point—the essence of the whole argument, in fact—is that from an exterior view, that is, from the perspective of someone outside Searle’s locked room, the complete system appears to understand Chinese, yet Searle himself is the only part of the system that possesses understanding, and he does not understand any Chinese whatsoever—he is not even aware that the symbols are Chinese. It would be absurd to claim that a batch of Chinese symbols possesses understanding, or even that the combination of Chinese symbols and English instructions somehow possesses understanding. It is equally absurd to claim that Searle somehow understands Chinese, simply because the system is correctly returning valid Chinese answers to Chinese questions—to Searle, they are nothing more than meaningless squiggles. In theory, if he became incredibly adept at manipulating Chinese symbols, it would be impossible for someone external to his locked room to tell if there were a native Chinese speaker within, understanding their questions, or a system such as Searle envisaged, understanding only English instructions detailing how to manipulate Chinese symbols, and not understanding Chinese at all.

Searle has set up a scenario that is essentially a simulation of a computer program, and which to all intents and purposes appears to understand Chinese, but where it is clearly absurd to claim that it actually does understand Chinese. The only part of the scenario possessing any understanding at all is Searle himself, and he clearly does not understand Chinese. The argument here is, of course, that artificial intelligence is the same as this scenario—a computer, even if it can successfully mimic the understanding of Chinese (or any such task), still does not actually understand Chinese (or whatever task it was performing). The computer itself does not understand anything; it merely follows some procedural program. It makes no sense to claim the program running on the computer understands anything either, as it is simply executed by the computer, according to predetermined rules. Thus, artificial intelligence (at least in the sense of “strong” AI) is indeed artificial—“thinking machines” do not actually think or understand, although they may indeed possess “weak” AI, and the ability to correctly simulate various human situations or scenarios.

This, I believe, quite conclusively shows that computers cannot think, or more accurately, that a program running on a computer cannot, of itself, provide intentionality as a mind does, although it brings us to something of a circular argument. So far, we have assumed that a computer is a particular type of machine, namely, a digital computer or Turing machine; or more explicitly, we have assumed that brains are not computers, as the brain can, quite clearly, possess a mind (the ability to think), yet computers can’t. Arguing that a computer cannot possess a mind, yet a brain can, actually begs the question—one which we shall discuss now.

We have so far shown that while computers cannot think, brains can—yet we are now unsure whether or not a brain is, in fact, a computer. The quickest way out of this dilemma is the simple logic that if a computer cannot think but a brain can, then a brain is not a computer—but it would pay for us to take a little more care with this question, lest we fall right back into the fallacy of petitio principii. The computational theory of cognition claims that human cognition is computational (somewhat obviously); that is, the human brain (assumed as the centre of human cognition, the actual location of which is irrelevant to our argument) is indeed a computer, and that the mind, with its understanding and intentionality, is nothing more than a collection of computational programs executing within the brain.

The main objection to this theory, as shown by Searle’s Chinese Room thought experiment, is that we can postulate a scenario where what is, to all intents and purposes, a “brain” (or more accurately, a “mind”) as envisaged by the computational theory of cognition (the Chinese Room as a complete system) does not actually understand anything. Surely then, if such a “mind” can be shown to possess no understanding at all, yet a human mind quite clearly does, then it follows that the human mind is not merely a collection of computational programs (or processes) running within a brain. The human mind must possess something extra; something that gives intentionality to what otherwise appears to be a collection of formal processes. What exactly this mysterious thing is—a subject of much discussion both philosophically and psychologically—is not relevant; what is relevant is that there is some thing, some difference, between a formal collection of programs, and a mind, and it seems impossible that any program, or collection of programs, can ever possess this intangible “thing”. Unless we can show that it is possible to not only emulate a human brain using a computer, but to actually duplicate a human mind using a collection of programs; to have a machine possessing mental states, intentionality, beliefs, desires, and all the other human traits associated with intelligence, purely because it is running a collection of programs, then we must conclude that the human mind is not a computer—and we can show that for any instantiation of a program or collection of programs, all we have is the same scenario as Searle’s Chinese Room.

This quite conclusively shows that the human mind is not a computer or collection of programs running on a computer, but not necessarily that the human brain is not a computer, as it is possible that the intangible “something” that is the difference between a collection of programs and a mind is not actually in the brain, but is something else entirely. In Searle’s “Chinese Room” thought experiment, it can only be argued that the system possesses some form of intelligence if the system itself contains some intelligence (in this case, Searle himself—or at least his mind), an argument ad infinitum. For example, while we have shown that the “Chinese Room” does not understand Chinese (although it does mimic the understanding of Chinese), it does understand English, but only because Searle himself understands English. Taking Searle out of the “Chinese Room” leaves nothing—a room devoid of any understanding whatsoever. Similarly, taking the intangible “something” out of Searle leaves a man devoid of understanding—essentially a series of programs running without intent, a brain without a mind. Put simply, a brain may indeed be a computer (there do seem to be similarities in the functioning of the brain and a computer, and we have no reason to suspect it is not—but nor is there any proof that it is), but a mind is certainly not just a computer, or collection of programs running on one.

To conclude, it seems clear that artificial intelligence, while seemingly feasible, is not possible in the sense that a program or collection of programs running on a computer can never duplicate a mind, with intent, belief, understanding—all the causal properties of the human mind inherent in intelligence itself. That is not to say that some form of man-made intelligence can never be achieved, just that it can never be achieved solely through the use of procedural programs, regardless of the computer (or indeed brain) they are running on, its speed, or its other properties—and that this is the definition of artificial intelligence. It also seems clear that, while a mind is certainly not a computer or a collection of programs running on one, whether the brain itself is a form of computer remains something of a moot question, and one better answered by science.

Bibliography

Martin, N 2005, ‘The Mind is a Macintosh; the Brain is a Bulldozer’, personal PHIL1000 essay preparation notes, University of Queensland.
Putnam, H 1975, ‘Turing Machines’, in J Perry & M Bratman (eds), Introduction to Philosophy, 3rd edn, Oxford University Press, 1998, pp. 354–355.
Searle, JR 1980, ‘Minds, Brains, and Programs’, in J Perry & M Bratman (eds), Introduction to Philosophy, 3rd edn, Oxford University Press, 1998, pp. 368–381.
Turing, AM 1950, ‘Computing Machinery and Intelligence’, in J Perry & M Bratman (eds), Introduction to Philosophy, 3rd edn, Oxford University Press, 1998, pp. 355–368.