Defining Intelligence

Dedicated to Sam Rendall – you and your ideas are greatly missed. 

What is intelligence? What does it mean to be intelligent? A starting point for a very high-level definition might be the ability of an agent to accomplish its goals in its environment. From this definition, we can see three pieces to assessing a particular agent’s intelligence – the complexity of its goals, the complexity of its environment, and its level of success in accomplishing those goals. Human metrics for assessing intelligence generally focus on the third point, standardizing both environment and goals (e.g. a standardized IQ test). We’ve come up with ways to measure success across a wide variety of goals (such as spatial, logical, and emotional objectives, among many others) – and research has shown a significant correlation between these different types of intelligence. This correlation points to a unifying variable driving these different types of intelligence, which has traditionally been labelled g. What does it mean to have a high g? From a high-level perspective, it means that people with high g are better able to make sense of the world we live in and to identify patterns and draw parallels between situations. From a low-level perspective, it likely says something about how the brain is wired and how easily various areas connect.

However, these are very human-focused views of intelligence. What is the root concept? How did there come to be this distinction of intelligence / consciousness vs. the rest of the matter in the universe? These are the interesting questions we’ll explore throughout this essay. We’ll start with a history of “intelligence” (quotation marks applied as the meaning of the word may be stretched), then dive into what human intelligence looks like and what the main drivers of variance may be. From there, we can entertain some ideas of how intelligence might look in the future – how much possibility is there to craft AI systems that work similarly to us? It seems that to answer this question, it will first be critical to establish exactly what it is that we’re doing that deserves to be labelled “intelligent” behavior.

The most important realization coming out of the past few centuries of scientific exploration is that our universe seems to be governed by some set of natural laws. While we don’t have full visibility into these laws at their most granular level (and may never), we’ve made significant strides in building up an understanding that reaches to the level of subatomic particles. Certain fundamental laws (too complex to be explained here – and beyond the grasp of my understanding) govern the behavior of fermions and bosons, and from those laws we can (in principle, mostly) specify the behavior of more traditional particles (i.e. protons and neutrons), leading to a theory of the atom. Chemistry takes over at this point, providing a theory of the atom and the atomic interactions that produce molecules – though, again, in principle, these interactions could be explained based on the fundamental laws. These laws of chemistry lead to the behaviors observed in biology, and biology leads to other, higher-level laws governing group behavior and adaptation (and, in the case of humans, the laws of economics). I’ve been using the word “law” here to describe these scientific learnings, but another way to look at them is simply as the identification of repeating patterns – a specific type of order. The key takeaway is that, due to the fundamental laws governing the behavior of our universe, order and regularity spring up at every level of observation, from subatomic particles, to atoms, to molecules, to groups of molecules, to organisms, to societies. It is this order that has created the possibility for intelligence – which can be viewed as any means of “taking advantage” of this order. (“Taking advantage” here assumes an agent-based viewpoint, which is required for “intelligence” to be a meaningful idea – take a view of the universe from the particle level and all you’ll see are fundamental laws at work, with no intelligence. Luckily, we live at the agent level of experience.)

An aside on entropy (“Just what really is entropy?”)

A review of the concept of entropy may be of use in understanding what it means to “take advantage” of the order of the world. If you Google “what is entropy”, the first definition to come up is “a thermodynamic quantity representing the unavailability of a system’s thermal energy for conversion into mechanical work, often interpreted as the degree of disorder or randomness in the system.” Put in (hopefully) simpler terms, entropy is a measure of a system’s potential ability to act in a specified non-random way. This seems somewhat straightforward from our vantage point, but what about when looking at entropy from the point of view of a particle? How does a particle act in an “entropic” way? The law of entropy as laid out doesn’t actually apply at the particle level – all it says about the particle is that it will act according to the laws of physics. The law is an emergent property at the next hierarchical level – it states that particles (in a closed system) acting according to the laws of physics will, over time and on average, move towards a random distribution. “Taking advantage” of the order of the world is the same type of emergent property – it is meaningless when viewed from the level of particles (i.e. the levels of physics or chemistry), but has explanatory power at higher levels in the hierarchy. We’ll find that “intelligence” is the same type of emergent property – meaningless at the level of particles, but critical for understanding higher level behaviors.
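To make this emergent character concrete, here is a toy sketch (the bin count, step rule, and function names are my own illustrative assumptions, not anything from the definition above): each particle follows a simple, lawful rule – an unbiased random step – yet the population as a whole drifts from an ordered arrangement toward a spread-out, higher-entropy one.

```python
import random
from collections import Counter
from math import log2

def entropy_bits(positions):
    """Shannon entropy (in bits) of how the particles are spread across bins."""
    counts = Counter(positions)
    n = len(positions)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def simulate(num_particles=200, num_bins=10, steps=20000, seed=42):
    """Lawful random motion at the particle level; rising entropy at the system level."""
    rng = random.Random(seed)
    positions = [0] * num_particles          # ordered start: every particle in bin 0
    start = entropy_bits(positions)          # 0.0 bits – perfect order
    for _ in range(steps):
        i = rng.randrange(num_particles)
        step = rng.choice([-1, 1])           # each particle just takes unbiased steps
        positions[i] = min(num_bins - 1, max(0, positions[i] + step))
    return start, entropy_bits(positions)
```

No individual particle “tries” to raise entropy – the rule it follows says nothing about disorder – yet the system-level entropy climbs from 0 toward the log2(10) ≈ 3.3-bit maximum for ten bins.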

How did organisms first start to “take advantage” of the order of their environment, and how did this lead to the type of intelligence we see exhibited today? Some degree of intelligence can be attributed even to the first replicating agent, as this agent would have been meeting its “goal” of reproducing within the context of its environment (compared to other, potentially similar arrangements of matter which did not accomplish this task, and remained inert). Adaptations to better take advantage of the order and patterns of its environment would have arisen through a process of natural selection – it is relatively easy to imagine a particular replicator incorporating a reaction which improved its odds of splitting into two when in the presence of a chemical that generally occurred in areas with favorable “reproductive” conditions. This type of “intelligence” is obviously very different from the type generally associated with the word (which is generally a more abstract form, with neurons and their connections representing certain patterns of the world) but the label makes some sense for both – in many ways, the behavior humans exhibit and the simple environmental fit of early replicators can be viewed on the same spectrum.

The type of intelligence we’re more familiar with would have arisen as organisms became larger and needed ways to propagate signals throughout their bodies / better capture the patterns of the world. The neuron cell provided a way to do this by sending an electrical / chemical signal to other neurons, which could then directly influence the behaviors of other cells in the body to drive particular actions. I’ll skip over a detailed view of the exact evolutionary process (as it’s not fully known and is less important) and focus instead on the idea that an organism with these types of cells (properly integrated) would have an advantage in its environment. Neuron type cells allow for more complex behaviors (such as coordinated body movements to take evasive action or chase food) and a more abstract / hierarchical means of capturing patterns, and thus can be viewed as opening up a huge expanse of new levels to the potential evolutionary landscape. 

One key addition to this evolutionary landscape was the possibility for organisms to “learn” by having update processes for the neuron connections which changed them based on “good” or “bad” experiences in the world. Again, I’ll skip over the exact evolutionary route by which this update process could have come into being – instead we’ll focus on what it meant for the progression of intelligence. At this point in history, the organisms we’re considering would have had neurons integrated into their bodies and contributing to driving their actions (though without any centralized control center like a brain), and these neurons would have had the ability to change their types of responses to certain stimuli based on the results. Evolutionary gain could be had either through new initial arrangements of neurons that drove more effective behavior, or through an improved update system that allowed an organism to better adapt to its environment during its time on earth. In general, improvements to the former would be best suited for environments that were relatively homogeneous (from a generational perspective), while improvements to the means of learning would be most critical in highly heterogeneous environments where an organism’s progeny might face drastically different conditions than the organism did. It’s important to remember that all survival and reproduction gains have been due to the incorporation of the regularities of the world into organism behavior – neurons and their resulting benefits are no different, but they do allow for a “higher level” of regularity to be captured. For example, in a single cell organism, the regularities dealt with are generally at the level of chemistry (e.g. the cell wall is made in such a way to insulate the operations inside while still allowing the transfer of necessary chemicals), while neurons allow for adaptation to regularities such as a specific tactile sensation or a particular pattern in the field of vision.
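As a cartoon of such an update process (every name and number here is my own illustrative assumption), consider a single connection weight per stimulus, strengthened when responding to that stimulus pays off and weakened when it doesn’t:

```python
import random

def train(episodes=500, lr=0.2, explore_rate=0.1, seed=0):
    """Learn by trial and error which of two stimuli is worth responding to.

    weights[s] is the 'connection strength' for stimulus s. Stimulus 0 predicts
    a good outcome (reward +1 for responding); stimulus 1 predicts a bad one
    (reward -1 for responding). Not responding earns nothing either way.
    """
    rng = random.Random(seed)
    weights = [0.0, 0.0]
    for _ in range(episodes):
        s = rng.randrange(2)                                   # which stimulus appears
        respond = weights[s] > 0.5 or rng.random() < explore_rate
        if respond:
            reward = 1.0 if s == 0 else -1.0
            weights[s] += lr * reward                          # strengthen or weaken the link
    return weights
```

An organism whose progeny face the same conditions could have the final weights hardwired from birth; it is when the payoffs shift within a lifetime that the update rule itself becomes the valuable adaptation.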

Neuron structures continued to evolve, allowing for increasingly adaptive behavior – and eventually a seemingly more optimal arrangement of having a “central processing area”, aka the brain (again, going to skip the exact details of how this would have occurred), to control behaviors evolved. This adaptation allowed for the capture of even more complex patterns / regularities, as learning and synthesis could occur between all parts of an organism’s behavior. Larger, more differentiated brains continued to be a sound evolutionary strategy (as sound as anything could be from an evolutionary perspective – who knows if mammals could have succeeded without that asteroid?) and relatively straightforward (again, from an evolutionary perspective) growth of intelligence happened from here. Organisms continued to adapt to their environments in more robust and complex ways – gaining the ability to identify friends and foes, take coordinated evasive actions, trick their predators, memorize particular landscapes (including food and water locations), build homes, etc. Again, it’s important to stress that these abilities all stem from the ability to recognize and properly react to the regularities of the world – these abilities just happen at a vastly higher hierarchical level than the “abilities” of a single cell. While the progression up to humans appears to have been relatively straightforward, some critical tipping points were reached during the evolution of homo sapiens, and these deserve a deeper look.

The two tipping points most critical to human evolution were the development of language, and the related ability to pass along learnings between generations (“culture”). These abilities required the emergence of a new type of intelligence – an ability to interpret the signals of others not just as a direct indicator of something currently existing in the environment (e.g. how bees “dance” to represent the location of food or predators, or how certain apes “yell” to alert the group of the presence of a particular predator), but rather as an abstract representation of something currently existing in the utterer’s brain. This is an important point, so let me take a step back and ground it in an example. Consider the process of identifying the location of a food source. For animals of “lower” intelligence, this ability to share information is directly built into them by evolutionary means – in the simplest example, an insect might let out a certain chemical or noise when in the presence of food, and the other members of the group then follow it due to their biological hardware. There’s no “thought” about what to do – the animal just acts according to its innate drives / mechanisms, which guide it towards food when in the presence of the signaling chemical. Note that there’s no way in which an insect could “think about” food when not in the immediate presence of food / the signaling chemical, as its relationship to / brain activity about the object is directly dependent on the object’s physical presence in its immediate environment. Moving up the intelligence hierarchy, consider how a dog identifies a food source. The same types of drivers that influenced the insect still apply, but now there’s another level of associative learning that the dog’s brain is capable of. Pavlov was one of the first to dive into this type of learning, with his experiment involving feeding dogs while ringing a bell.
After a while, the dog learned to associate the bell with food, and so the dog would salivate just at the sound of the bell – even if no food was present. This ability reveals some flexibility in brain function – the dog has concepts of food and of noise, and can recognize that these concepts generally occur together in its environment. While the insect has no ability to “think about” food, a dog can “think about” food, in a sense, when it hears a bell (assuming it has been conditioned to – or, in more normal circumstances, might “think about” food when it sees its owner walking towards where the dog food is stored). However, this ability is still very dependent on environmental proximity – there needs to have been repeated instances of the stimulus and the “thought” of object / outcome occurring together, which necessarily requires them to be related in time and space. This is why our communication with dogs is limited to simple commands such as “fetch” or “sit” – the dog is capable of associating a particular sound with a particular immediately desired behavior, but not capable of understanding that the sounds are representative of particular thoughts in the human’s brain which they’re trying to convey. Humans, on the other hand, have made that leap of cognition. We can represent food or a food source (as well as infinitely many other concepts) using particular sounds, as those sounds are interpreted by others not as representative of anything immediately present in the environment, but as an abstraction currently present in our mind. When you read the word “apple”, you don’t look around for an apple (although you might get hungry!), you instead recognize that I’m trying to convey the generic concept of “apple” and you bring that concept to mind, to interact with the other concepts being conveyed. This is symbolic thinking, and it is the primary ability that has allowed us to carve out our own level in the hierarchy of earthly intelligence. 
The details of how we may have attained this ability are best left for a separate essay, but for a very detailed exploration check out Terrence Deacon’s “The Symbolic Species”. 
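The bell-and-food learning Pavlov observed is often modeled with the Rescorla-Wagner rule, in which the associative strength of the bell moves a little toward whatever the trial actually delivered. A minimal sketch (function and parameter names are my own):

```python
def rescorla_wagner(trials, alpha=0.3):
    """Track the bell's associative strength V across conditioning trials.

    Rescorla-Wagner update: V += alpha * (lam - V), where lam is 1.0 on
    trials where food follows the bell and 0.0 on trials where it doesn't.
    """
    v = 0.0
    history = []
    for food_present in trials:
        lam = 1.0 if food_present else 0.0
        v += alpha * (lam - v)       # nudge V toward what this trial delivered
        history.append(v)
    return history
```

Strength climbs toward 1.0 over repeated pairings and decays back when the bell stops predicting food (extinction). Note that the rule only updates when stimulus and outcome co-occur in a trial – which is the proximity limitation of associative learning described above, and the one symbolic thinking escapes.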

We’ve now covered (in a very quick four pages) the evolution of intelligence, and hopefully now have some more solid, shared ground to stand on when trying to understand what intelligence is. At its root, intelligence is a concept that we ascribe to life – and as the central goals of any kind of life as we know it are to survive and to reproduce, general intelligence corresponds to an organism’s ability to take advantage of the order of the world to better survive and reproduce. This is the general definition – but the tipping point just discussed allowed for intelligence to be used in novel ways, as brains became powerful enough to incorporate themselves and other brains as part of the recognized order of the world. An insect’s model of the world contains very simple concepts like food and danger, with essentially mechanical responses depending on the observed concepts; a dog’s model of the world contains more complex concepts like friend, angry, and pain, with an ability to associate these concepts with each other; and a person’s model of the world stretches out far enough to capture the fact that they have control over their actions and communications and that others do too. (There’s a certain circularity here – our brains take in all the sensory information we get from the environment to determine actions, but our brains are also part of that environment, as are the actions driven by the brain. It is this type of circularity that the philosopher / cognitive scientist Douglas Hofstadter deems a “strange loop”, and while we don’t delve into the origins of consciousness here, his writings and the writings of Dan Dennett are the best sources I’ve found for that question.)
This more robust model of the world has granted us the ability to think symbolically, using words to represent ideas ranging from the most simple to extremely complex, and has allowed us to escape the “solitary, poor, nasty, brutish, and short” lives of our fellow creatures so as to give us time to explore the types of ideas discussed here.

So we’ve (in the broad sense, treating all life as “we”) evolved from some unique auto-catalytic reactions to abstractly thinking creatures that can reason about our universe. That’s a fantastic journey, but it naturally raises the question – what’s next? Is there some higher level of intelligence we’ll reach through further evolution, or do we (again, in the broad sense) need to wait for another cataclysmic event to wipe out humans and create an opening for a new type of higher intelligence to evolve (maybe the cephalopods are next!)? Luckily for us (or maybe unluckily – we’ll treat the ethics as out of scope here), our abstract thinking has given rise to further exploration of the workings of the world, and through this exploration we’ve identified ways to create artifacts (computers) that rival our brains in complexity and will likely one day allow for a type of artificial intelligence. I hope this essay has gotten you more comfortable with the possibility of higher level intelligence (requiring a more robust model of the world and expanded symbolic thinking) and helped you form some ideas of what artificial or higher level intelligence might look like – these ideas merit an essay of their own, so I’ll cover my thoughts on the path forward and potential implementation strategies in my next essay, “How can we build a brain?”.

Author’s Note: If you enjoyed this post, please consider subscribing to be notified of new posts 🙂
