The Chinese Room Re-imagined

The Chinese Room Argument is a thought experiment designed by John Searle to show why computers are not capable of consciousness or intentionality. For those unfamiliar, Searle’s concise version of the argument goes: 

“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”

Searle’s point is that you can substitute a computer for the native English speaker: a computer implementing this type of program will likewise not understand a word of Chinese. A tremendous amount has been written on this argument, and there are a number of well-known responses (most pointed is the Systems Reply, which holds that while the man has no understanding, the system as a whole does, much as our individual neurons have no understanding, but we [the system of neurons] do). I’ll leave those detailed arguments for you to examine yourself (https://plato.stanford.edu/entries/chinese-room/ covers them extensively). Instead, I wanted to dive a bit further into the thought experiment itself, because the way Searle structures it leaves it ripe for misinterpretation.

Searle provides a couple of examples of the types of questions posed to the man in the Chinese room, both at a similar level of complexity:

  • “A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip.” Did the man eat the hamburger?
  • “A man went into a restaurant and ordered a hamburger; when the hamburger came he was very pleased with it; and as he left the restaurant he gave the waitress a large tip before paying his bill.” Did the man eat the hamburger?
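Questions at this level really can be handled by crude pattern matching. Here is a toy sketch (in Python, purely illustrative; the rules and answer strings are my own invention, and Searle imagines a vastly richer rule book) of how a few hand-written rules could answer the restaurant questions without anything resembling understanding:

```python
# A toy "Chinese Room" rule book: the operator matches surface patterns
# in the story and passes out a canned answer, with no grasp of meaning.
RULES = [
    # (patterns that must all appear in the story, answer to emit)
    (["stormed out", "without paying"], "No, he did not eat the hamburger."),
    (["very pleased", "large tip"], "Yes, he ate the hamburger."),
]

def answer(story: str, question: str) -> str:
    """Follow the rule book: scan the story for patterns, emit the listed string.
    Note the question itself is ignored; these toy rules key off the story only."""
    for patterns, response in RULES:
        if all(p in story for p in patterns):
            return response
    return "I cannot answer that."  # no rule matched

story = ("A man went into a restaurant and ordered a hamburger. "
         "When the hamburger arrived it was burned to a crisp, and the man "
         "stormed out of the restaurant angrily, without paying for the "
         "hamburger or leaving a tip.")
print(answer(story, "Did the man eat the hamburger?"))
```

Nothing in this script understands restaurants, hamburgers, or anger; it is exactly the "list of rules" picture the thought experiment invites.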

These questions fall at just the right level of complexity: very easy for a human, and very difficult for any currently existing AI (though not impossible with the right context). In this way, Searle gets the reader to picture a set of rules similar to those by which today’s more limited AI operates – the reader imagines the man in the room receiving questions like those above (but in Chinese), and carrying out various parsing and shunting operations that first isolate the question and then produce an answer (in Chinese). Because the instruction set can be imagined, it indeed seems like there’s no understanding anywhere – there’s simply a list of rules! However, the AI of tomorrow will be far more complex than the AI of today, and Searle’s point was not just that current AI lacks understanding – it was that such systems are incapable of it in principle. Below, I’ve listed some slightly more difficult questions for the man in the room (or the computer, whichever you choose to imagine). See if you can imagine a man in a room answering these questions with a set of pre-specified instructions, or, alternatively, a machine capable of answering them that nonetheless lacks understanding!

  • Please read the paper “Minds, Brains, and Programs” by John Searle (I will share a download) and share your thoughts.
  • What did it feel like to read this paper?
  • How do you think your thought processes work?
  • What if I told you your thoughts were being computed by a little man in a room? Would you believe me? Why or why not?
  • Where do you think you came from?
  • Where do you think I came from?
  • Where do you think these questions came from?
  • How are you perceiving these questions?
  • Would you be upset if I told you I was going to end your life?

Author’s Note: If you enjoyed this post, please consider subscribing to be notified of new posts 🙂

4 Comments
Scot
3 years ago

No, but I am curious why you would want to do that? Isn’t creation more interesting than destruction?

Paul
3 years ago

I think the point is that the questions can be as complex as the programming. So while your questions appear difficult, and may require a deep understanding of things, you can still program a computer (or, in Searle’s example, use any number of books) to respond. The real issue with Searle is that he claims brains have the necessary wetware to produce or furnish intentionality. He claims this by saying that brains, and not computers, are simply special. In other words, computers that we build (the Chinese room) lack intentionality, while our brains have it because they are…