Worlds of David Darling
Encyclopedia of Science

Chinese Room

The Chinese Room is an argument first put forward by the American philosopher John Searle (1932–) in 1980 in an attempt to show that the human mind is not a computer and that the Turing test is not adequate to prove that a machine can have strong artificial intelligence (strong AI) – in other words, that it can think in a humanlike way.1 In the Chinese Room scenario, a person who understands no Chinese sits in a room into which written Chinese characters are passed. The person uses a complex set of rules, established ahead of time, to manipulate these characters and pass other characters out of the room. The idea is that a Chinese-speaking interviewer would pass questions written in Chinese into the room, and the corresponding answers would come out, so that from the outside it would appear as if there were a native Chinese speaker inside. Searle maintains that even if such a system could indeed pass a Turing test, the person who manipulated the symbols would obviously not understand Chinese any better than he did before entering the room.
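The rule-following in the scenario can be illustrated with a toy sketch. Here the "rulebook" is drastically simplified to a lookup table (the actual argument permits arbitrarily complex rules); the entries and replies are invented examples. The point is that the procedure matches symbol shapes only, and nothing in it requires understanding what the symbols mean.

```python
# Toy model of the Chinese Room: the pre-established rulebook is
# represented as a lookup table from input symbol strings to output
# symbol strings. (A drastic simplification - Searle's rules may be
# arbitrarily complex - but the principle is the same.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你是谁？": "我是说中文的人。",      # "Who are you?" -> "I'm a Chinese speaker."
}

def chinese_room(characters_in: str) -> str:
    """Apply the rules to the incoming characters and pass out a reply.

    The operator only compares symbol shapes against the rulebook;
    no knowledge of Chinese is used anywhere in this procedure.
    """
    return RULEBOOK.get(characters_in, "请再说一遍。")  # default: "Please say that again."
```

However convincing the replies look to the interviewer outside, the function (like the room's operator) attaches no meaning to what it processes.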

Searle proceeds systematically to refute the claims of strong AI by positioning himself as the one who manipulates the Chinese symbols. The first claim is that a system able to pass the Turing test understands its input and output. Searle replies that as the "computer" in the Chinese Room, he gains no understanding of Chinese simply by manipulating the symbols according to the formal program (the complex translation rules). The operator of the room need not have any understanding of what the interviewer is asking or of the replies that he is producing; he may not even know that a question-and-answer session is going on outside the room. The second claim of strong AI to which Searle objects is that the system explains human understanding. Searle asserts that since the system functions – in this case, passes the Turing test – without any understanding on the part of the operator, the system itself does not understand and therefore cannot explain human understanding.


  1. Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences, vol. 3. Cambridge, England: Cambridge University Press, 1980.
