When asked to explain how artificial intelligence works, most people respond with an analogy or simply admit that they don’t know. Ask those same people whether artificial intelligence can understand the texts we enter into it, and the answers will be split. What does it mean to understand? With a thought experiment that has been dubbed “the Chinese Room”, Searle[1] attempts to illustrate that an AI does not reason and understand in the same way humans do. In this essay I will take a critical look at this thought experiment to show that it is flawed in its presentation of AI’s understanding of natural language. I will first examine how Searle’s scenario fails to represent reality, by turning to the work of Schank on which Searle based his thought experiment. From that comparison I will argue that, by ignoring the complexity of AI, Searle presents a misleading thought experiment, leading to the conclusion that the Chinese Room is flawed as an assessment of whatever capability of understanding an AI may have.
Let’s first take a look at the thought experiment in question. Below is a summarised version of it.
“The Chinese Room” describes a situation in which a person who does not speak Chinese is given a set of rules, written in English, for turning Chinese characters into other Chinese characters. The person is placed in a room with slots for inputting and outputting Chinese characters, and they follow the rules without understanding what the output means. To outsiders it seems as if there is a native Chinese speaker in the room.
This thought experiment is meant to illustrate that an AI does not think for itself. With that conclusion in mind, the scenario illustrates the point well: the person turning the Chinese characters into other Chinese characters puts no thought into what they produce; they are just following rules. But does this idea of an AI as a computer that merely follows rules match up with the real world?
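To make that picture concrete, here is a minimal sketch, in Python, of a system that answers purely by rule lookup. This is my own illustration, not anything from Searle’s paper; the specific strings and the RULES table are invented for the example. Nothing in it represents meaning: input symbols are simply mapped to output symbols.

```python
# Minimal sketch of answering by pure rule lookup (illustrative only, not Searle's own example).
# The "room" maps input symbol strings to output symbol strings; no meaning is represented anywhere.

RULES = {
    "你好吗？": "我很好。",          # hypothetical rule: this input string yields that output string
    "天气好吗？": "今天天气很好。",
}

def chinese_room(symbols: str) -> str:
    """Return whatever string the rule book dictates for the given input string."""
    # The fallback reply is itself just another rule; the operator understands none of these strings.
    return RULES.get(symbols, "对不起。")

print(chinese_room("你好吗？"))  # to an outside observer, a fluent-looking reply
```

From the outside the replies look competent, which is exactly Searle’s point; the question this essay raises is whether real AI systems are as simple as this lookup.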
Gendler[2] describes a three-part structure of a thought experiment:
- An imaginary scenario is described.
- An argument is offered that tries to extract the correct evaluation from that scenario.
- That evaluation is then used to reveal something about cases beyond the scenario.
Looking at the “Chinese Room” thought experiment, we can identify each of these parts. The scenario is that of a person changing Chinese characters into other Chinese characters, using a set of rules, in an enclosed room. The argument is then made that in this process the person is not acting autonomously: they are not thinking for themselves when processing the characters. Finally, this is applied to AI: an AI only follows rules to produce the results we want from it. In essence, Searle is arguing that all we want from an AI is to pass the Turing test[3]. But I argue that this does not represent reality.
While the situation described in the thought experiment is imaginable and the argument establishing the evaluation of that situation is sound, the conclusion is inapplicable to AI. Inapplicability here is the negative counterpart of the third part of Gendler’s tripartite structure, which she claims is a useful way to provide criticism of a thought experiment[2]. So why is the thought experiment inapplicable? To answer that, we have to look at how Searle came to create it.
Searle created the thought experiment in response to the work of Roger Schank and his colleagues at Yale. Schank was building programs to simulate the human ability to understand and interpret stories. The interesting thing here is the contrast in how much complexity each author ascribes to the way an AI interprets its input. As described in the work of Schank[4] (pp. 20-22), an AI researcher models a process and then programs each subprocess of that model, to see what inputs and outputs those subprocesses have. A ‘script’, as Schank describes it, contains the information needed to understand the subtext of a story that someone might ask the AI questions about. Such a script combines many of these complex subprocesses to resemble the natural language processing of humans. An example he uses is that of asking for directions. If you are told that to get to Coney Island you have to take the ‘N’ train to the last stop, you understand what to do. An AI would need information about walking to the station and paying the fare in order to understand the same instruction. All this extra information is the ‘script’, and it would be accessed and acquired through these subprocesses.

Compare this to how Searle describes a ‘script’ in his thought experiment: he labels the first batch of Chinese characters, together with the instructions for turning them into other Chinese characters, the ‘script’. In an attempt to show that an AI does not have the ability to understand its input, the sketched situation has been simplified to the point where it no longer resembles reality.
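To make the contrast concrete, the following is a deliberately simplified sketch of the ‘script’ idea in Python. It is only an illustration of the kind of background knowledge Schank describes; his actual programs were far richer, and every name and string below is invented for the example.

```python
# A deliberately simplified sketch of Schank's 'script' idea: stored background knowledge
# that fills in steps a story leaves unstated. All names and strings here are illustrative.

SUBWAY_SCRIPT = [
    "walk to the nearest subway station",
    "pay the fare",
    "board the train",
    "ride to the intended stop",
    "exit the station",
]

def answer(story: str, question: str) -> str:
    """Answer a question about a story by consulting the script for implicit steps."""
    if "train" in story and "pay" in question.lower():
        # The story never mentions paying, but the script supplies that implicit step.
        implicit_step = SUBWAY_SCRIPT[1]  # "pay the fare"
        return f"Yes, the rider would {implicit_step} before boarding."
    return "The script offers no answer to that question."

story = "To get to Coney Island, take the 'N' train to the last stop."
print(answer(story, "Did the traveller have to pay?"))
```

Even in this toy form, the answer depends on background knowledge the input never states, which is precisely what Searle’s bare rule book leaves out.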
The importance of representing reality in this thought experiment stems from how we look at the concept of understanding. As far back as Locke[5] it has been argued that humans have no innate ideas: all ideas are acquired through experience or reflection. Building on this idea, I would argue that if we can replicate in an AI the processes by which knowledge, and by extension understanding, is acquired, then that AI would have the same sense of understanding a human has. These processes are complex, and by simplifying them in his thought experiment Searle presents an incorrect view of reality.
A thought experiment is meant to apply to reality, but by simplifying reality to the extent that Searle does, his thought experiment no longer applies to it. Therefore we cannot take the Chinese Room into consideration when talking about the capability of understanding and reasoning an artificial intelligence might or might not have.

References

[1] Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/s0140525x00005756

[2] Gendler, T. (2000). Thought Experiment: On the Powers and Limits of Imaginary Cases. Routledge. https://doi.org/10.4324/9781315054117

[3] Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/lix.236.433

[4] Schank, R. C., & Abelson, R. P. (1977). Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Psychology Press.

[5] Locke, J. (1894). An Essay Concerning Human Understanding.