Consciousness is, and may always be, a thorny subject to discuss. For many people, no physical explanation is satisfactory, as it never seems to address questions along the lines of “but why does it feel like something to be me? I can think and see things and do things – we need to explain that “I”, not just how our brains do what they do! Where does that “I” come from?!” These questions are hard to address directly now, with our limited understanding of how the brain works – but even in a future scenario where we fully understand how the brain does what it does, skeptics could still reach for these questions to cast doubt on any explanation. The crux of the issue seems to lie in a difference of domain – we all naturally feel ourselves to be something different from our brain function, yet every explanation describes the phenomenon as brain function, at the level of neurons, synapses, electrical currents, and chemical reactions. This intuitively feels wrong to us – akin to describing the sound of music as specific types of sound waves, or the sight of a rainbow as specific wavelengths of light. There seems to be something more, something a physical description can’t capture, something that exists “outside the system”. In this post, I’ll attempt to shed some light on why, contrary to this feeling, it’s actually what’s “inside the system” that matters.
To start, it will be helpful to take a step back and establish the domain within which we’d like to understand the phenomenon of consciousness. To the best of our understanding, our universe is made up of particles of matter, and these particles interact with other particles in various ways, according to a fixed set of laws. There are varying levels of energy spread throughout the entire system, residing within the bonds of these particles, in their motion, and in the particles themselves – and this energy influences how they interact (again, according to that fixed set of laws). There are enormous complexities I’m skipping over here, but generally, this seems to be a fair definition of the domain we’re working with, at least for the purposes of this discussion. While this domain may seem simple from the above definition, it certainly is not – the details I skipped over include the 12 elementary matter particles, the 100+ types of atoms, and the essentially infinite variety of possible molecules (with tens of millions of types on Earth alone). Staying within the described domain, we’re able to explain life (a particular organization of these particles resulting in persistence and replication of that organization), evolution (the outcome of many competing instances of life in an area with a limited amount of energy and matter), and the brain (an adaptation of life that extracts statistical regularities from external stimuli and “uses” them to inform the actions of the body). However, as we saw earlier, consciousness does not seem to fit cleanly into this domain – we know it’s related to the brain, but seemingly in a way that doesn’t slot into the domain we’ve specified. To better understand where it fits, let’s take a deeper look at what the brain is actually doing.
As mentioned above, the brain can be viewed as a system that receives inputs from the environment (the form and detail of this input vary greatly by organism, but generally encompass touch, smell, sight, and hearing) and creates a sort of “model” of the world based on regularities, then uses this model to inform actions (with the chosen actions serving as another kind of input). Thinking about a worm and its brain / nervous system may be helpful here. The worm’s nervous system is a simple one, with only 302 neurons (using C. elegans as the example) – these neurons make up about ⅓ of the worm’s total cell count, with 68 sensory neurons (which detect signals from the worm’s environment and send them to the interneurons), 121 interneurons (which act as the intermediary between the sensory and motor neurons, and also have myriad connections amongst themselves), and 113 motor neurons (which directly control the worm’s muscles). I list these numbers not to get overly specific but rather to show you the relative simplicity of the system, so you can get a sense of the difference in scale vs. a human brain (with its 86 billion neurons). Even with its small nervous system, the worm is capable of learning about its environment and creating a model of the world in a number of ways. When the worm is in a petri dish, it will move away if the petri dish is tapped (as this is a helpful reflex to avoid danger), but with repeated taps, it will move away less and less, as the connections between its neurons are updated to reflect the fact that the taps are now a part of its environment and don’t need to be avoided. The worm can also learn to associate particular chemicals (e.g. Na+ or Cl-) with the presence of food – a control worm shows no preference for either chemical, but a worm that has been fed in the presence of Na+ will begin to move towards Na+ gradients, as the neuron connections are updated to better reflect the state of its environment by associating food with Na+. If you’ve found these examples interesting, this article (http://learnmem.cshlp.org/content/17/4/191.full.html) is very much worth a read (and was my source for these two examples). Just as the worm’s simple nervous system is capable of incorporating petri dish taps and chemical gradients into its model of the world, more complicated animals with more complex brains are able to create more robust models of the world. Certain “higher” animals like rodents, bats, and monkeys are even able to form accurate representations of other animals, objects, contexts, and places – the “place” functionality is even known to arise from a specific type of neuron (known as a “place cell”) that represents particular locations (and allows the mouse to solve all those laboratory mazes!). The neural activity required to support these models is obviously far more complex than what happens in the worm, but the same idea applies – the brain takes advantage of the fact that there are certain regularities in our world by “modeling” these regularities (at varying hierarchical levels) and using them to inform behavior.
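To make those two worm examples a bit more concrete, here’s a minimal toy sketch in Python. To be clear, this is not a model of C. elegans’ actual circuitry – the function names (habituation, associate) and the decay/learning rates are invented purely to illustrate the idea of “connections being updated by experience”.

```python
# Toy illustration of the two C. elegans learning examples above.
# Not a biological model -- the update rules and rates are invented
# solely to show "connections get updated by experience".

def habituation(num_taps, response=1.0, decay=0.7):
    """Withdrawal response shrinks with each repeated tap."""
    responses = []
    for _ in range(num_taps):
        responses.append(response)
        response *= decay  # each tap weakens the tap -> withdraw coupling
    return responses

def associate(trials, weight=0.0, rate=0.2):
    """Na+ -> food association strengthens each time they co-occur."""
    for na_present, food_present in trials:
        if na_present and food_present:
            weight += rate * (1.0 - weight)  # strengthen toward 1
    return weight

if __name__ == "__main__":
    print(habituation(5))              # withdrawal response fades with each tap
    trials = [(True, True)] * 10       # worm repeatedly fed in the presence of Na+
    print(round(associate(trials), 3)) # learned Na+ preference, approaching 1
```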
As briefly touched upon above, this concept of “modeling the world” gets much more interesting as the scope and accuracy of the model improves. The human brain is able to recognize the order and regularities of the world at a very high level, resulting in understanding of a huge variety of objects, including other people, to whom it is able to attribute particular traits and tendencies. The brain exhibits great predictive powers based on extending its model forward, and also great learning abilities, demonstrating significant model plasticity. The most interesting part, however, is that the brain’s model of the world includes itself! The brain, and the body it controls, are a (major) part of the environment which the brain attempts to model. This may seem obvious and uninformative – but it actually holds the key to understanding our feeling of consciousness.
Before exploring this idea further, it may be helpful to go over what the process of “modeling the world” looks like for the human brain. As a very simple example, let’s consider the experience of seeing a dog for the first time – upon observing the novel creature, the brain creates a concept of “dog” (and will attach the word “dog” to it, if the person is informed of the label) and this concept includes all the observed attributes of the dog: the appearance of four legs, hair, a tail, etc.; the actions of barking, playing, fetching, etc.; and the personality of friendliness, curiosity, etc. (assuming, for all these items, that they were observed in said dog). The brain has taken input from its environment and used it to create a particular concept, making its model of the world more complete – essentially saying, “I’ve seen this thing, so I know this type of thing can exist in the world – so it makes sense for me to have a concept for it and recognize it, as there’s a chance of seeing it again [and a much higher chance than seeing all the types of things which I have not observed, as there’s order to our world]”. As more and more dogs are observed in a variety of settings, this initial model is refined to incorporate the new knowledge of how the world works. Obviously, a dog is pretty far up the hierarchy as far as abstractions go – any brain successfully incorporating the concept of a dog would first need to be acquainted with objects (the most general type of regularity – the fact that certain groupings of matter tend to remain together and act in a consistent way) and animals (the regularity of there being formations of matter that seem to have goals and aims and act accordingly). A similar type of process is followed for all the other concepts we learn as part of being human, with concepts building on each other through context – if a particular concept shows up frequently with another concept, our brains are able to recognize that and “tie them together” – with the symbolic meanings required for language as our crowning achievement in that area (we’re able to learn to associate particular sounds with particular concepts in our environment).
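As a loose caricature of that refinement process (a sketch only – the brain obviously isn’t storing Python dictionaries, and the Concept class and its threshold below are invented for illustration), you can picture the “dog” concept as a running tally of which attributes keep showing up together:

```python
from collections import defaultdict

class Concept:
    """A toy 'concept' built from repeated observations.

    Each observation is a set of attributes seen together (e.g. the
    features of one particular dog).  The concept's picture of a
    typical instance is whatever attributes have appeared in most
    observations so far -- refined as more instances are seen.
    """

    def __init__(self, label):
        self.label = label
        self.num_observations = 0
        self.counts = defaultdict(int)

    def observe(self, attributes):
        self.num_observations += 1
        for attr in attributes:
            self.counts[attr] += 1

    def typical(self, threshold=0.5):
        """Attributes present in at least `threshold` of observations."""
        return {a for a, c in self.counts.items()
                if c / self.num_observations >= threshold}

dog = Concept("dog")
dog.observe({"four legs", "hair", "tail", "barks", "friendly"})
dog.observe({"four legs", "hair", "tail", "barks", "fetches"})
dog.observe({"four legs", "tail", "barks", "curious"})
print(dog.typical())  # {'four legs', 'hair', 'tail', 'barks'} (set order may vary)
```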
But as mentioned earlier, one of the concepts which our brains must form is the concept of ourselves! The brain, and the body it controls, are a central part of the environment which the brain is trying to model. Similar to our above example, the brain sees (and hears, and smells, etc.) its body doing things in the world (yes, it’s also the one making it do those things!) just like it sees the dog doing things – and so a concept for that body is formed. Conveniently, this body happens to be in the brain’s environment a lot, and so the brain sees it in many different contexts and can model it with great accuracy. Think of how strong a mental model you have of your best friend – when speaking with them, you often know how they’ll respond, you know what makes them angry, can tell when they are sad, and generally feel “in tune” with them. Our brain has great abilities to model its environment, and has particularly strong abilities to model other agents (e.g. your best friend, other humans, dogs) – so when it turns these abilities to model the most important agent in its environment (itself), it’s able to create an incredibly accurate representation – in fact, from our point of view, it’s a perfect representation – as this representation is us. As discussed earlier, consciousness seems to be in a different domain than the rest of the world – and indeed it is. Consciousness is what it feels like to be modeled by your brain – we’re not “outside the system”, but rather caught up in a strange loop inside it.
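To caricature that “strange loop” in code (again just a sketch – the Agent class below is hypothetical and vastly simpler than anything a brain does), picture an agent that keeps models of the agents in its environment, where one of those models happens to be of the modeler itself:

```python
class Agent:
    """A toy agent that models the agents in its environment --
    including itself.  Purely a caricature of the 'strange loop' idea."""

    def __init__(self, name):
        self.name = name
        self.models = {}  # agent name -> dict of believed traits

    def observe(self, name, trait, value):
        """Update the model of some agent (possibly this very agent)."""
        self.models.setdefault(name, {})[trait] = value

    def model_of_self(self):
        return self.models.get(self.name, {})

me = Agent("me")
me.observe("best friend", "mood when teased", "annoyed")
me.observe("me", "mood when teased", "laughs it off")  # the loop: modeling the modeler
print(me.model_of_self())
```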
Certainly it is an observable behavior that organisms react to and model their environment, and certainly it is an observable behavior in humans that they react to and model their own internal biology (I’m talking about their brain), but the notion of an organism reacting to itself in no way explains the sensation of consciousness.
Thanks for reading! I agree with you that it’s not a rigorous scientific explanation of the phenomenon, but I do think it’s a helpful and interesting way of looking at things. I especially find interesting the conclusions one can reach when looking at this “self-modeling” from first principles (the below is taken from another one of my posts here). To start with the simplest case of understanding, we might say that a computer programmed to respond to simple arithmetic questions (e.g. “what is 2+2”) has a very basic “understanding1” ability. To an outside observer, it appears to understand the questions…