Exploring Mindspace and General Intelligence

Intelligence is notoriously difficult to define, in part because of the variety of ways in which we see it manifested in the world. From humans, to mice, to ant colonies, to bacteria, to computational algorithms, we see intelligence all around us, but it’s difficult to put a finger on a single definition which covers all cases. In lieu of a comprehensive definition, we can make some progress in our understanding by dividing up intelligence into different types. Keeping in mind that the lines between these types are somewhat blurry, intelligence can be divided as follows:

  • Instinctual / programmed abilities and responses (what an agent is capable of without any experience in the world)
  • Learned understanding of the world (what an agent becomes capable of by itself over time with experience)
  • Learned understanding of the ideas inside the minds of other agents (what an agent becomes capable of over time after learning from other agents; this is the main step which differentiates humans from other animals, though some other species may be able to communicate ideas to a degree)

In the remainder of this post, we’ll first take a deeper look at how to think about each of these types, and will then use the framework to explore the space of possible minds (“mindspace”) and identify potential paths forward.

Instinctual / Programmed Abilities and Responses

This type of intelligence involves no learning; instead, it involves systems which are structured in such a way as to effectively handle the environment they are placed into. For humans, our reflexes are a good example of this type of intelligence. When something foreign comes close to your eye, your eye shuts – this happens because of a particular “hard coded” neural circuit, without any reliance on or connection to anything you’ve learned throughout your life. It may seem strange to label this behavior as intelligent with reference to humans, but we frequently do so for other species or entities. We call bees intelligent for targeting flowers, ants for moving toward food, worms for coming above ground when it rains, viruses and bacteria for reproducing at the appropriate times, and computers for performing complex calculations. In each of these instances, the exhibited behavior is directly coded for in the configuration of the organism / system; no experience is required. In theory, any sort of behavior could be coded for directly, but in practice we find the coded-for behaviors to be useful, because they’ve been run through the gauntlet of natural selection (or human selection, in the case of computers). For this type of intelligence, there’s a direct relationship between the structure of the agent and the behaviors the agent is capable of, without any dependence on the environment in which the agent will exist. The usefulness of a specific agent’s capabilities is dependent on the environment, but the range of these capabilities is not (e.g. an ant placed on Mars would continue to act like an ant, though this behavior would be far less productive).
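To make this concrete, here is a minimal, purely illustrative sketch (in Python; the agent, its stimuli, and its responses are hypothetical examples of my own, not anything from the discussion above) of an instinctual agent whose behavior is a fixed stimulus-response mapping, determined entirely by its structure and untouched by experience:

    # A purely "instinctual" agent: its stimulus-response mapping is fixed
    # when it is built and never changes, no matter what it experiences.
    class InstinctualAgent:
        def __init__(self):
            # The agent's "structure": a hard-coded lookup table of reflexes.
            self.reflexes = {
                "object_near_eye": "blink",
                "flower_detected": "approach",
                "rain_detected": "come_to_surface",
            }

        def act(self, stimulus):
            # Behavior depends only on structure, never on prior experience.
            return self.reflexes.get(stimulus, "do_nothing")

    agent = InstinctualAgent()
    print(agent.act("object_near_eye"))  # -> "blink", on Earth or on Mars

No matter how long such an agent runs, the range of behaviors it can produce never expands.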

Learned Understanding of the World

The development of complex brains allowed a new type of intelligence to form, one which involved capturing certain regularities of the world in the neural substrate and leveraging these regularities for effective action. This mode of intelligence is especially evident in mammals, but can be seen in other members of the animal kingdom as well. As mentioned previously, the lines between these types are blurry, and so from a certain point of view, even extremely simple organisms like C. elegans can be viewed as exhibiting this type of intelligence (for example by adapting to react less to taps on their petri dish over time). However, the focus here is on the far more extensive abilities of creatures like mice. A mouse can learn a great deal about the world, over time coming to associate certain locations with predators, other locations and actions with food, etc. The mouse develops a model of the world in its brain, and this model informs its behavior and allows it to better accomplish its goals. In contrast to the instinctually intelligent ant, a mouse placed on Mars would modify its behavior accordingly, and if there were air to breathe and food to eat the mouse might very well figure out how to survive. For this type of intelligence, the behaviors the agent is capable of are dependent on both the structure of the agent and the environment the agent exists within (and the experiences it has had in that environment). Machine learning systems like GPT-3 and AlphaStar exhibit this type of intelligence, learning regularities of the world through exposure to large amounts of it (text, in GPT-3’s case, and gameplay, in AlphaStar’s).
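By way of contrast, a toy sketch of this second type of intelligence (again purely illustrative; the names and the simple update rule are my own assumptions, not any real system’s algorithm) might look like the following. The agent starts out knowing nothing about any particular environment and builds up associations from experience, so its behavior depends on both its structure and its history:

    # A toy "learning" agent: it builds up associations (e.g. location ->
    # estimated danger) from experience, so its behavior depends on both
    # its structure and the environment it has actually encountered.
    from collections import defaultdict

    class LearningAgent:
        def __init__(self, learning_rate=0.5):
            self.learning_rate = learning_rate
            self.danger = defaultdict(float)  # location -> learned danger estimate

        def observe(self, location, outcome):
            # Nudge the estimate toward what was experienced
            # (1.0 = predator encountered, 0.0 = nothing bad happened).
            self.danger[location] += self.learning_rate * (outcome - self.danger[location])

        def act(self, location):
            return "avoid" if self.danger[location] > 0.5 else "explore"

    mouse = LearningAgent()
    mouse.observe("riverbank", 1.0)  # saw a predator at the riverbank
    mouse.observe("riverbank", 1.0)  # and again
    print(mouse.act("riverbank"))    # -> "avoid"
    print(mouse.act("burrow"))       # -> "explore" (no bad experiences there yet)

Drop the instinctual agent into a new environment and it keeps doing the same thing; drop this one in and its behavior gradually reshapes itself around whatever regularities that environment happens to contain.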

If the mouse’s environment includes other mice, the mouse will incorporate their actions into its model, but critically, it won’t be able to include any part of the models formed by the other mice. Each mouse forms its own independent model of the world, and though these models overlap significantly, there’s no way for information to be exchanged directly. If one mouse knows that a certain area is dangerous, it can’t share that knowledge with another mouse (though, in the event both mice end up in that space, the knowledgeable mouse may alert the other indirectly by demonstrating its fear). 

Learned Understanding of the Ideas Inside the Minds of Other Agents

As we just saw with the mouse, the ability to learn from the world is, for many systems, a solitary endeavor. Over the course of its life, the mouse might learn a great deal about the world, but when it dies, that model is lost, and other mice are left to build up their own models from scratch. For certain types of minds, however, there’s a way to share internal knowledge and make learning about the world a more communal pursuit. As discussed here, humans have leveraged this ability to far surpass the intelligence of other species. We’ll focus on humans here as it gives us a tangible example, but keep in mind that this capability is not in principle limited to human types of minds. The evolution of this ability to share parts of our internal world models (i.e. to share understanding) represented a step change in intellectual development. Prior to the development of this capability, knowledge was limited to what could be learned in a lifetime. Each organism developed its own world model over the course of its life, and regardless of how robust or accurate this model was, all insights were lost upon death. As humans unlocked the ability to share parts of our internal models, it created an opportunity for ideas to perpetually develop, untethered to any particular human mind. For example, it likely took many generations for language to progress from its initial stages to anything we could recognize as such today, but once developed, a baby could then pick up this ability from others in a matter of years. 

One way of viewing this change is as creating a repository of ideas existing separately from any individual human. Each human has the ability to tap into this repository, and potentially to contribute new ideas to it, but the repository itself exists in a distributed way across all human minds, and continues on after any individual leaves this world. Richard Dawkins coined the term “meme” for these ideas in his book The Selfish Gene, and pointed out that, much as genes evolve over time independently of any organism, memes evolve over time independently of any single brain. Good ideas from one mind are taken up by others, opening the door for those minds to build on them with even better ideas, which are then passed back into the repository (through sharing).

For an example of this process, consider Albert Einstein. If you dropped Einstein off on Mars as a baby (again, let’s assume access to food / water / shelter), he wouldn’t even learn to speak, let alone develop any of his famous equations. However, Einstein grew up on Earth, where his mind was surrounded by other minds which housed the collective repository of human knowledge, built up over thousands of years. Tapping into this repository, Einstein learned much about the world very quickly. Once he had learned all there was to learn (in his particular field), he applied some of the techniques he had learned to identify gaps in the edifice of knowledge and to wrestle with how to address them. Fortunately, the download of ideas had shaped his mind in such a way as to allow for progress on these hard problems, and he came up with many revolutionary ideas, among them the fact that E = mc². Once he discovered these ideas, he contributed them back into the body of knowledge (by communicating them to others), where they were easily picked up. Far more time and effort was required to develop the idea of E = mc² than is required to understand it now that the idea exists (this disparity of effort feels strongly analogous to P vs. NP, where verifying a solution is far easier than finding one).

For this type of intelligence, the behaviors the agent is capable of are again dependent on both the structure of the agent and the environment the agent exists within, but with a special dependency on the ideas which exist in the environment (in the minds of other agents).

In focusing on humans, we’ve glossed over an additional constraint which is relevant when extending these ideas further. For the third type of intelligence, ideas need to be communicable in addition to existing. We humans all have similar minds, and the repository of our ideas has been crafted with this particular type of mind as the target, making for easy communication – but we can imagine other intelligent agents who are unable to “tap in” as easily. Humans also have an instinctive tendency to pay attention to other humans (think of infants making eye contact with their caregivers), without which it would be significantly more difficult for ideas to be passed on. Revisiting our dependencies, we can say that, for this type of intelligence, the agent’s behaviors are dependent on the structure of the agent and the environment it exists within, with a special dependency on the communicable / understandable ideas which exist in the environment (from the perspective of the agent). It’s not clear how limiting this condition is (we haven’t had much success sharing our ideas with other animals, but other animals are a very limited subset of possible intelligences), but it has potential to be a powerful constraint.

With this framework laid out for understanding the types of intelligence, we can now begin exploring the space of possible minds (using “mind” in the broad sense, to indicate any type of intelligence). We can think of the types of mind as a hierarchy; instinctual / programmed minds are some subset of possible arrangements of matter, minds which learn from the environment are some subset of instinctual / programmed minds, and minds which learn from the ideas of others are some subset of minds which learn from the environment.

Looking first at the possible instinctual / programmed minds, we see that, as described above, evaluation of their intelligence is dependent only on their structure. There’s no ability for these minds to learn – instead, they’re structured in such a way as to exhibit useful (or non-useful) behaviors. For any particular desired behavior, we can likely find a mind structure which will carry it out, but its effectiveness will be limited to that particular situation and won’t generalize. 

Moving on to minds which can learn, we see dependencies both on the mind’s structure and on the environment it exists within. Learning is only effective when the environment contains useful regularities to learn, and so the intelligence of these types of minds varies along two axes. The additional axis brings with it a huge degree of possibility, as the effectiveness of any particular mind now must be evaluated over all possible environments.

Finally, we come back to the most interesting type of intelligence, that which is dependent on the existing ideas of others. Here, we can see that the dependencies on structure and environment remain, but there’s now the additional dependency on existing ideas and communicability to consider. Shared brain structures have allowed humans to avoid any major issues with communicability, and as our environment and brain structure have remained relatively unchanged over the past 10,000 years, our advances are primarily due to accumulation and refinement of ideas over time. This dependency on existing ideas can be viewed as a third dimension, one which has grown quite large for humans, but without resulting in any gains for other species (due to lack of communicability). 

As we look toward building more intelligent machines (with a focus on the human type of intelligence), we’ll need to further consider what is required to “tap in” to the human repository of knowledge. In our human experience, it feels trivial to gain access – we’ve been fully immersed in human knowledge since before our earliest memories formed. The repository has been crafted for us, by us, and as such it makes sense that the ideas are assembled in a way which makes human assimilation quite easy. As mentioned previously, this repository can be viewed as having “evolved” through a type of “natural selection” of ideas, and so those which didn’t “fit” our underlying brain architecture would have lost out to those which did. This process has resulted in us developing a great deal of shared knowledge about our world, in a form which nearly any human baby can begin downloading from day 1.

However, any intelligent machines we build will not have the same benefit of a shared neural architecture. It seems likely that we’ll discover general principles of intelligence (perhaps the neocortical algorithms) long before we fully understand the brain (with all the complexity that sits in the motivational, emotional, and motor function systems) – and as such, the first generally intelligent systems we’ll build will be far from human. While these systems will surely have more processing power behind them than we do, isolated intelligence will not get very far. Unless these systems are able to “tap in” to our repository, they’ll be extremely limited in what they can accomplish (much as a human would be growing up alone on Mars). As covered here, we’re limited in the degree to which we can “build in” our knowledge into the machines (at least if we’re aiming for general intelligence – our knowledge can be “built into” instinctual / programmed systems quite easily), and are instead reliant on their structure being similar enough to access our shared repository. Interestingly, it may turn out that generally intelligent machines need to be a good deal like us!
