The Chinese Room Re-imagined: Reddit Discussion

I recently shared my last post on the Chinese Room to Reddit, which led to some productive discussion in the comments. I thought it might be helpful to share some excerpts from one of those threads here, as they provide a slightly different way of thinking about the issue. I hope you find the exchange interesting and illuminating – and either way, I'm happy to continue the discussion in the comments section here!

Reddit User Freded21:
I'm not exactly sure what point the author is trying to make, but I can very easily imagine a Chinese room answering those questions at the end. No matter how complex a question is, a tape recorder playing the proper response and a wise and thoughtful person giving the same response have not demonstrated any different level of understanding. That's the key to the Chinese room: the question and answer don't matter. As long as the person inside can do the right pattern matching and spit out the correct answer (or, in the case of a lot of those questions, "a" correct answer), the outside observer can't be sure of the person inside's level of understanding.

Meanderingmoose:
I very much agree with you that the key to the Chinese room thought experiment is that the exact question and answer do not matter, as long as the person inside (or a processor, computer, etc.) can do the right pattern matching. What I was looking to highlight with these questions was that the complexity of the pattern matching required for any machine to be able to answer these questions suitably would be far beyond that suggested by Searle’s questions – in fact, to suitably answer questions of this type, the machine would need to be capable of human level pattern matching. This means that the man in the box (or processor, computer, etc.) would not be able to answer these types of questions by running syntactical instructions treating the words as tokens (simplifying greatly, I mean instructions like [if “hamburger” and “burned”, “eat” = False]). Instead, to accurately answer questions of this sort, a different sort of algorithm is needed – we need one that functions in a way far more similar to our brains than to our computers of today, with each word deeply represented in a web of concepts.
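To make the contrast concrete, here's a rough sketch in Python (my own toy illustration, not anything from Searle or the original post) of what I mean by purely syntactic, token-level rules, and of how quickly such rules run out:

```python
# Toy illustration of purely syntactic pattern matching: words are opaque
# tokens, and "answering" is just a lookup over hand-written rules.

RULES = {
    ("hamburger", "burned"): "He probably didn't eat it.",
    ("hamburger", "delicious"): "He probably ate it all.",
}

def answer(question):
    tokens = question.lower().replace("?", "").split()
    for pattern, response in RULES.items():
        if all(word in tokens for word in pattern):
            return response
    return "(no rule matches)"  # nothing to fall back on

print(answer("The hamburger was burned, did he eat it?"))       # rule fires
print(answer("The hamburger fell in the mud, did he eat it?"))  # falls through
```

Anything the rule-writers didn't anticipate simply falls through – which is the point: to answer open-ended questions reliably, "hamburger" has to be far more than an opaque token.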

Extending the brain parallel, we could picture the man in the room having a page representing the state of each neuron in the brain, and another 10 pages for each neuron specifying that neuron's update rule with respect to the other neurons. He'd then work through millions of time steps, updating the states and the update rules at each interval to calculate the response. Now, this system would be very slow – it might take a millennium for the man to make it through a single time step. It's difficult to imagine this system having any sort of understanding – but it's also difficult to properly imagine a process stretching thousands of times beyond our lifetimes. To bring things onto our timescale, let's say we set up a computer system out of cells, with the neuron state information represented by the cell states and the neuron update rules represented by connections with other cells. This system can now emulate the brain in real time, responding to all questions as a human might and fully passing any sort of Turing test. It's a large jump, but we're still working with a system carrying out a set of instructions. Can you still very easily imagine this "tape recorder" having no understanding?
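For a sense of what a single time step involves, here's a minimal sketch (purely illustrative, with made-up numbers and a trivially simple update rule) of the state-plus-update-rule bookkeeping the man, or the cell-based computer, would be doing:

```python
import numpy as np

# Toy emulation loop: each "neuron" has a state (the man's state pages), and an
# update rule expressed here as weighted connections to the other neurons
# (the man's update-rule pages). All numbers are arbitrary.
rng = np.random.default_rng(0)
n_neurons = 1_000                 # a human brain has on the order of 86 billion
state = rng.random(n_neurons)
weights = rng.normal(0.0, 0.05, size=(n_neurons, n_neurons))

def step(state):
    """One time step: each neuron's next state depends on all the others."""
    return np.tanh(weights @ state)

for t in range(100):              # the man works through these by hand,
    state = step(state)           # perhaps a millennium per step
```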

Reddit User Nameless1995:
Yes, sure. I just need to define and/or "understand" "understanding" in a way such that no amount of behavioral and functional pattern manipulation counts as "understanding" unless there is an associated phenomenality – or, even more radically, something like what is suggested here: https://www.newdualism.org/papers/E.Feser/Feser-acpq_2013.pdf [Author's Note: feel free to give this paper a read if you're curious, but note that it's a frustrating combination of dense words and thin concepts]

But don’t ask me if I “understand” in this sense or not.

Meanderingmoose:
As that website's name suggests, the article reads as Feser grasping for ways to build a platform under the ever-crumbling idea of dualism. I read up to page 7, where Feser starts in on the idea that other animals don't have concepts, and that this capacity somehow marks the difference between a truly immaterial faculty (supposedly unique to humans) and the purely material (other animals). There's simply no such gap! We're able to understand the world in a far more accurate way than other mammals, but dogs certainly have concepts too (ours are just much more detailed and refined). Taking a step back, many of Feser's claims turn out to arise from forgetting where we came from and mis-attributing a certain uniqueness to the human condition.

Nameless1995:
Yes, I don't buy random claims about human uniqueness. I am also not sold on dualism exactly (part of which seems to come down to how we use concepts). I won't even argue about "intelligence" (which anyone can define as anything). Sure, computers can be intelligent in some sense, just as submarines can swim, and I wouldn't be surprised if a major portion of the mind is intelligent in the manner of a computational process. Ninety percent of what happens, I am not even conscious of. For example, I am writing this text right now, and while it's a non-trivial language generation task, I am not aware of what I am going to write; the language is almost auto-generated. I only have a rough thought almost at the instant of writing, and I am not explicitly conscious of exactly what I am going to write. It wouldn't surprise me if there is some statistical process at play that automatically takes into account rough grammatical structure, the distributional hypothesis, and what not.

So, there are different ways to "understand" what "understanding" is. It is indeed hard to say a machine does not understand something when it can act as if it does – when it "utilizes" all the rules to "comprehend" the operations of a mechanism, and so, in that sense, "understands" it.

Instead of arguing over what "true understanding" is, let me just use two different concepts for now: understanding1 and understanding2 [Author's Note: emphasis my own]. Let's say the way a well-developed machine understands is "understanding1". It's possible that much of our own understanding is of a similar sort. Some, like Dennett, may even argue that all of our understanding is. But what could be "understanding2"? Well, there is a different way to understand understanding, which may be more intuitive to some, and necessary for some before they'd call an activity understanding at all. Understanding2 would treat understanding as the process of actively having the concepts/unicepts, the "meaning-associatings" and such, within phenomenal consciousness, alongside the "it is like something to be-ness". When I comprehend the meaning of 2+2=4 in an explicit way, I do it in phenomenal consciousness by having a feeling of understanding of what 2 is and what its implications are, by composing the concepts under one united consciousness, and so forth.

Meanderingmoose:
Let's take a step back and look just at "understanding1" – and see the diversity within that concept. I'll limit "understanding1" to any understanding we can describe at a mechanical level of function (I want to avoid slipping into any "machine vs. human" distinction just yet), and we'll consider the concept of understanding at its most basic (essentially an ability to generate a proper/useful/correct action – there's no need to get into the distinction between understanding and intelligence here).

To start with the simplest case of understanding, we might say that a computer programmed to respond to simple arithmetic questions (e.g. “what is 2+2”) has a very basic “understanding1” ability. To an outside observer, it appears to understand the questions, but we know that on the inside, there’s no true understanding – there’s simply a circuit which has been set up in the proper way so as to give a correct answer. Taking a biological perspective, we might say that creatures like viruses and bacteria “understand” how to reproduce themselves in the same way – to an outside observer looking in, they seem to understand how to multiply themselves and propagate in the world, but we know that their genes and proteins are simply set up in the proper way so as to produce this behavior.
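To make that simplest case concrete, here's a minimal sketch (hypothetical, purely for illustration): the machine gives right answers to arithmetic questions only because its wiring happens to be set up the proper way.

```python
import re

# The simplest "understanding1": a fixed mechanism that maps certain question
# patterns to correct answers, with nothing behind the mechanism.
def arithmetic_bot(question):
    match = re.match(r"\s*what is (\d+)\s*\+\s*(\d+)\s*\??\s*$", question.lower())
    if match:
        a, b = int(match.group(1)), int(match.group(2))
        return str(a + b)
    return "I don't know."

print(arithmetic_bot("What is 2+2?"))          # "4" - looks like understanding
print(arithmetic_bot("Why does 2+2 make 4?"))  # "I don't know."
```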

At the next level of understanding (still "understanding1"), we might have a computer program which is able to answer more complex questions, but only about a limited domain (think SHRDLU). Here, the concepts the program understands will need to be deeper than before; to answer a question like "Pick up a green pyramid and put it on top of a brown cylinder. What object does this resemble?", the program can't just have a simple fixed circuit that adds input digits – instead, it will need some type of hierarchical web of associations (especially if it is set up to be able to learn basic facts about its domain). This means it would understand the brown cylinder not just as a particular token, but as a token with "round" and "brown" and "straight" as attributes (with each of these attributes encoded in a similar type of web). While this type of machine could be as basic as SHRDLU, you could also imagine a more complex version (call it SHRDLU 2.0) which is able to learn by itself how to form concepts within its limited domain, through some type of neural network approach. Taking a biological perspective, we might say that creatures like worms and bugs understand the world in the same way as these machines – they're able to function in and learn things about their world (things like which concentration gradients suggest the presence of food), but they certainly don't have any "understanding2".
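Here's a rough sketch of the difference I have in mind (my own toy example, far simpler than SHRDLU's actual machinery): objects aren't bare tokens, they're nodes linked to attribute nodes in the same web, and answering a question means traversing those links.

```python
# Toy concept web: each concept is a node; objects are understood through
# their links to attribute concepts rather than as opaque tokens.
concept_web = {
    "cylinder": {"is_a": "shape", "attributes": ["round", "straight-sided"]},
    "pyramid":  {"is_a": "shape", "attributes": ["pointed", "straight-sided"]},
    "brown":    {"is_a": "color"},
    "green":    {"is_a": "color"},
}

scene = [
    {"name": "block A", "shape": "pyramid",  "color": "green"},
    {"name": "block B", "shape": "cylinder", "color": "brown"},
]

def describe(obj):
    """Describe an object by following its links through the web."""
    attrs = concept_web[obj["shape"]]["attributes"]
    return f'{obj["name"]} is a {obj["color"]} {obj["shape"]} ({", ".join(attrs)})'

for obj in scene:
    print(describe(obj))
```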

At the next level of understanding (still "understanding1"!) we might have a computer program similar to SHRDLU 2.0, but with more robust learning algorithms in place and no limits on its domain (we'll assume it has some types of sensors and effectors to receive input from and interact with the world around it). Let's call this new machine SHRDLU 3.0 (and please note that this type of machine is far beyond what we're currently able to construct, so try not to limit your visualization of it to the current state of the art). SHRDLU 3.0 would be effective at forming concepts based on its inputs to guide its actions in the world – for instance, it might form a general "object" concept, understanding that matter seems to group together in certain ways in the world – and from this object concept it might identify things like inanimate objects (don't move unless moved), regularly animate objects (things like water flowing, where there's movement but it follows a pattern), and highly animate objects (things like animals, that don't follow such clear rules). I'm using these concepts as examples, but note that SHRDLU 3.0 would be forming these concepts itself (based on its internal algorithms) and would not be "given" them directly by any programmers. So far, SHRDLU 3.0 seems like just a more complex version of SHRDLU 2.0, with a broader variety and depth of understanding, but still very much "understanding1".

However, something interesting happens when we make the jump to SHRDLU 3.0. Because its domain is not limited, the domain its algorithms learn from happens to include itself. This means that, in the same way that SHRDLU 3.0 is able to identify regularities of its world like objects and inanimate objects, it is also able to identify that there's a particular always-present object (SHRDLU 3.0) in its domain – and that this object seems to be the "do-er" of the actions SHRDLU 3.0 takes. The hierarchical, associative web of concepts which SHRDLU 3.0 makes of the world and uses to guide its actions is not only much deeper and broader than that of SHRDLU 2.0, but also of a different nature, because it includes itself (the hierarchical, associative web-making machine). Now, SHRDLU 3.0 is obviously an incredibly complex machine, so at least at first, its internal pattern recognizer won't have a very robust pattern for it – again tying back to biology, you could imagine this rudimentary SHRDLU 3.0 as having an understanding of its world similar to that of a mouse (or a cat, or a dog, depending on how robust you're picturing its pattern-recognizing abilities).

If you imagine this program improving over time, however (for the sake of argument, we can assume there's a whole population of SHRDLU 3.0s, that they all vary slightly, that they reproduce in a way where these varied traits are inherited, that resources are limited in the world they inhabit, and that an increased ability to recognize and incorporate the regularities of the world has survival and reproductive benefits…), you could see how the SHRDLU 3.0s (let's now call them 4.0s) would acquire increasingly powerful abilities to make sense of the world they inhabit, and of their own actions within that world. At a certain point, their ability to recognize the patterns of others of their kind might get so good (especially if they evolved to be social creatures) that they'd be able to form an abstract system for labeling and referring to certain concepts – language.
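To give a flavor of the jump that including itself in its own domain represents, here's a very loose sketch (hand-coded for illustration; in the story above, SHRDLU 3.0 would have to learn this regularity itself): the agent notices that one object shows up in every observation, and that this same object is the source of all its actions.

```python
from collections import Counter

# Very loose sketch: an unrestricted domain means the agent's own presence and
# actions are part of the data it models, just like rocks and streams.
observations = [
    {"objects": ["rock", "self"],            "actor": "self"},
    {"objects": ["stream", "mouse", "self"], "actor": "self"},
    {"objects": ["rock", "stream", "self"],  "actor": "self"},
]

presence = Counter(obj for obs in observations for obj in obs["objects"])
always_present = [obj for obj, n in presence.items() if n == len(observations)]
actors = {obs["actor"] for obs in observations}

# The regularity available to be learned: one object is always there, and it is
# also the "do-er" of every action - the seed of a rudimentary self-model.
print(always_present)  # ['self']
print(actors)          # {'self'}
```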

This turned into a bit of a long journey, but at the end of it, we've demonstrated the "understanding1" abilities of SHRDLU 4.0 – an algorithm which uses language to talk to other instantiations of the algorithm, which refers to itself in the first person, and which, if you asked it (in its own SHRDLU 4.0 language) whether it had "understanding1" or "understanding2", would say it most definitely had the latter. It would have a name for itself, and it might even be interested in philosophy, wondering about questions like "how did I get here?", "why am I conscious?", and "could a machine ever be conscious?". From the outside, we wouldn't be able to tell whether there was "understanding2" in there or not (just as we can't tell for other humans) – but I would believe SHRDLU 4.0, wouldn't you?

Author’s Note: If you enjoyed this post, please consider subscribing to be notified of new posts 🙂
