Exploring the Limits of Intelligence

One important concept in data science is that of signal. Signal represents the ability of data to inform accurate predictions in the target domain; a dataset that can tell you more has more signal. For example, say you’re trying to predict tomorrow’s weather in New York, day after day. For the sake of the example, we’ll say New York is sunny 50% of the time, cloudy 30%, and rainy 20%. A naïve guess (based on no data) might be to always say the weather tomorrow will be sunny (as that’s the most common option), and that guess would be right ~50% of the time. Alternatively, let’s say information on today’s weather is also provided. Now the best approach might be to predict that tomorrow’s weather will be like today’s. If this approach results in 60% accuracy, then the data on today’s weather contains signal – a connection exists between today’s weather and tomorrow’s (though the exact manifestation of this connection is harder to parse out). We can imagine another dataset containing temperature and pressure readings from sensors across the country, which, when run through some algorithm, allows us to correctly predict New York’s weather for the next day 80% of the time (though this may not be the limit – perhaps more advanced algorithms could achieve 90%, if the connection between sensor data and weather is that strong). This dataset contains even more signal – a robust connection exists between the temperature and pressure readings throughout the country and New York’s weather.

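To make this concrete, here’s a minimal sketch in Python of the weather example above. The weather process is made up for illustration: with some probability tomorrow simply repeats today, and otherwise it’s a fresh draw from the 50/30/20 distribution; the 0.355 “stickiness” value is tuned so the persistence guess lands near the 60% from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

marginal = np.array([0.5, 0.3, 0.2])  # sunny / cloudy / rainy frequencies
p_stick = 0.355                       # chance tomorrow simply repeats today

# Simulate the toy weather process described above.
n_days = 200_000
weather = np.empty(n_days, dtype=int)
weather[0] = rng.choice(3, p=marginal)
for t in range(1, n_days):
    if rng.random() < p_stick:
        weather[t] = weather[t - 1]             # persistence
    else:
        weather[t] = rng.choice(3, p=marginal)  # fresh draw

today, tomorrow = weather[:-1], weather[1:]

naive_acc = np.mean(tomorrow == 0)        # always guess "sunny" (state 0)
persist_acc = np.mean(tomorrow == today)  # guess tomorrow == today

print(f"naive (always sunny): {naive_acc:.3f}")    # ~0.50
print(f"persistence:          {persist_acc:.3f}")  # ~0.60
```

The gap between ~50% and ~60% is the signal contributed by knowing today’s weather.
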
Much like data science, intelligence also requires signal. Just as there are regularities in the relationship between sensor readings and the weather, there are myriad regularities of the world which can be captured and leveraged by intelligent life. A worm can follow specific chemical gradients to find food because, on average, food is more likely in that direction. A dog can chase a ball because the photons reflecting off the ball hit its eyes and provide information on the ball’s location. A human can build a quantum computer because entangled particles behave in consistent ways. In a random universe, there would be no room for intelligence, as there would be no regularities for intelligence to “latch onto”. Luckily (or, more accurately, necessarily) for us, our universe appears to be quite regular from its lowest levels on up (with the regularities becoming more statistical in nature at higher levels). Expressed differently, our universe provides plenty of signal.

Organisms have evolved to take advantage of these regularities, with the evolution of brains accelerating the process. Brains serve as a sort of “regularity capture net” which allows organisms to learn about and utilize the regularities they observe in the world (before brains, regularities and the associated advantageous actions needed to be encoded directly in the genome, a much slower and less flexible process). Human brains have become particularly adept at this process, and have also benefited from the (mostly) uniquely human ability to share captured regularities with others through language, leading to our powerful abilities to extract signal from the world around us. Interestingly, our success has changed the dynamics of the environment from which we’d like to draw signal, with our primary focus turning to recognizing patterns in the behavior of others. Achieving our goals now requires, more than anything, an understanding of people (to the extent such understanding is possible).

While humans (and our brains) are good at capturing signal, there’s no reason to believe other thinking systems couldn’t do better. The pressures of evolution have certainly guided our species in the direction of increased understanding, but evolution does not readily have access to all potential designs. Even among humans we see a great deal of variety in capability, so it seems, at the very least, the factors leading to those differences could be pushed further. Nick Bostrom makes this point in his book Superintelligence, framing the full span of animal and human intelligence as a narrow band on a much larger scale of possible minds.

However, although the ability to capture signal may increase arbitrarily, the amount of signal itself is fixed. When predicting the weather in New York, we saw that more advanced algorithms might extract more information from the sensor data, but prediction accuracy would still never reach 100%. The sensor data simply does not capture all of the underlying dynamics of the weather – accuracy will necessarily max out at some level below 100% (due to the chaotic nature of weather formation). The key question is how much signal exists in our world for intelligence to “latch onto” (i.e., how powerful a hypothetical superintelligence could be).

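We can compute that ceiling directly for the toy weather process from the earlier sketch (same made-up parameters). Even a predictor that knows the true transition probabilities exactly – the best any intelligence could do given only today’s weather – tops out around 60%:

```python
import numpy as np

marginal = np.array([0.5, 0.3, 0.2])
p_stick = 0.355

# Transition matrix of the toy process: row s gives P(tomorrow | today = s).
# With prob p_stick the weather repeats; otherwise it's a draw from marginal.
T = p_stick * np.eye(3) + (1 - p_stick) * marginal

# The best possible predictor that sees only today's weather picks the most
# likely next state from each row; its expected accuracy is the ceiling.
ceiling = np.sum(marginal * T.max(axis=1))
print(f"accuracy ceiling: {ceiling:.3f}")  # ~0.600
```

No amount of additional intelligence moves that number – only additional data (like the sensor network) can raise the ceiling.
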
Before exploring that question, some clarification is required. Signal, on its own, is a meaningless concept – we can’t assess how much signal historical weather data contains unless we know what we’re trying to predict. Similarly, we can’t say how much signal exists in the world without defining the intended goal a bit more precisely. For the purposes of this analysis, we’ll consider the goal to be general success (i.e. money, influence) within the domain of human society. While a narrow goal, this framing will allow for better focus on the idea of information availability. In reality, a superintelligence would likely transcend this limited domain (perhaps by identifying powerful new laws of physics or constructing nanomachines), and as such this argument makes no case against the dangers of superintelligence (which seem real, though I’d argue not yet on the horizon). Rather, this analysis will serve to examine the limitations that imperfect signal places on intelligence in a complicated domain (that of human society).

With the goal of general success in human society, we’ll begin (as many have) with the stock market. What rate of return might a superintelligence be able to achieve in the market, given only publicly available information? For simplicity, we’ll ignore high-frequency / arbitrage opportunities (as we could expect a superintelligence to do well in that simpler domain). Framed differently, to what degree is the signal in the publicly available information (e.g. historical prices, company reports, news, etc.) sufficient to determine how the actions of millions of market participants (including the superintelligence itself) will come together, and interact with each other, to drive prices?

For example, Tesla stock went up nearly 10x in 2020 – could that have been determined with the signal available at the beginning of 2020? While some people identified this trend (or at least a few loud people did), the identification was generally not repeatable, and appears to have been driven as much by luck as by analysis. To make a concrete prediction, our superintelligence would need to understand both the distribution of investor (and consumer) mindsets toward Tesla and the way in which those mindsets would change as the stock price changed. The chaotic nature of this type of environment (where small differences in initial values can quickly balloon into large differences in outcomes) means that quite a robust understanding would be required for accurate prediction, as illustrated in the sketch below. The available signal does not appear sufficient for this depth of understanding – generalities could certainly be gleaned, but the chaos resulting from complex interactions between individual market participants would likely soon overtake them. It seems that even a superintelligence would have a substantially imperfect understanding of the stock market, a limitation which would likely hold for business and politics as well. There’s simply not enough signal available to parse out the complexities of individual behavioral drivers. There will certainly be statistical trends that may be of some use, but the underlying chaotic nature of these domains (together with the limited signal) greatly dampens the prediction accuracy of even a superintelligence.

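Here’s a toy illustration of that sensitivity. This is not a market model – the logistic map below is just a standard stand-in for chaotic feedback (enthusiasm attracts buyers until crowding reverses it) – but it shows how an initial-condition error of one part in a billion destroys point forecasts within weeks:

```python
import numpy as np

# Toy chaotic feedback: each day's "bullishness" x (in [0, 1]) feeds back on
# itself nonlinearly. The logistic map at r = 4 is a standard chaotic example.
def simulate(x0, days=60):
    xs = [x0]
    for _ in range(days):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = simulate(0.600000000)  # the "true" initial sentiment
b = simulate(0.600000001)  # our estimate, off by one part in a billion
gap = np.abs(a - b)

# The measurement error roughly doubles each step; within ~35 steps the two
# trajectories are unrelated, so long-range point forecasts are hopeless.
for day in (0, 10, 20, 30, 40):
    print(f"day {day:2d}: divergence = {gap[day]:.2e}")
```

Better measurement of the initial state buys only a little more forecast horizon – because errors grow exponentially, each extra digit of precision extends the horizon by only a constant number of steps.
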
Another area of interest is our superintelligence’s ability to predict (and influence) the behavior of a specific individual while interacting with them (and receiving a correspondingly much richer signal). Is seeing and hearing a person sufficient for a complete understanding of how they will act? We humans do quite well in this arena and are only infrequently surprised by the behavior of others, although we generally aren’t making exact predictions. We’re even able to influence the actions of others – salespeople, for example, are particularly adept at this, knowing the right questions to ask and tone to use to ensure (or at least maximize the probability of) a sale. We do all this using generalities; we know how people work in general (particularly because we know how we work), and use that high-level understanding as a guide for prediction and influence. Can our superintelligence get more granular than that? To what degree is it possible to understand the deeper workings of someone’s mind based only on seeing and hearing them? It seems we run into the same issue of chaotic evolution here as well – even if the superintelligence could parse out the general concepts churning in someone’s mind by seeing and hearing them, the underlying neural structure (with its billions of neurons and trillions of synapses, actively communicating via myriad action potentials) would remain off-limits, leaving its understanding of behavior incomplete. Again, there’s simply not enough signal for the superintelligence to “latch onto”, as the sketch below suggests.

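To suggest why the unobservable fine structure matters, here’s a minimal sketch using a random recurrent rate network – a standard toy model of neural dynamics, with all parameters illustrative. Two copies of the same network, whose synaptic weights differ by one part in a million (standing in for the detail no outside observer can measure), are driven from the same starting state; their activity rapidly decorrelates:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "brain": N rate neurons with random recurrent weights. With a gain
# above 1, such networks are known to exhibit chaotic ongoing activity.
N, gain, steps = 500, 1.5, 200
W = rng.normal(0.0, gain / np.sqrt(N), size=(N, N))

def run(weights, h0, steps):
    h = h0.copy()
    traj = []
    for _ in range(steps):
        h = np.tanh(weights @ h)  # simple discrete-time rate dynamics
        traj.append(h.copy())
    return np.array(traj)

h0 = rng.normal(0.0, 1.0, size=N)
base = run(W, h0, steps)

# Identical architecture and statistics, but synapses perturbed by one part
# in a million -- the fine structure an outside observer cannot see.
W_twin = W + rng.normal(0.0, 1e-6, size=(N, N))
twin = run(W_twin, h0, steps)

gap = np.linalg.norm(base - twin, axis=1) / np.sqrt(N)
for t in (0, 50, 100, 150, 199):
    print(f"step {t:3d}: state divergence = {gap[t]:.2e}")
```

Knowing the concepts in someone’s head (the coarse state) without the synaptic detail is analogous to knowing the weights only approximately – fine for short-range, statistical predictions, hopeless for exact long-range ones.
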
To reiterate, the two examples just reviewed say nothing about the limits of intelligence itself. There may be possible thinking systems which can quickly uncover the deepest laws of physics and use those to manipulate the universe as they see fit (as just one example). We humans are certainly not the limit of intelligence, and there are risks associated with constructing powerful thinking machines that surpass our capabilities. However, what the two examples above do show is that even a superintelligence is limited by the available signal (and in the domain of human society, the signal is certainly limited). Omniscience is not a property of thinking systems, particularly when dealing with domains they have not constructed. Applying this idea to an often-discussed example, a superintelligence in a box will, in all likelihood, remain in the box, if that’s what the keepers of the box desire. Signal, rather than intelligence, is the limiting reagent, at least in the domain of human society.
