Exploring Simulation Theory

While the question of whether we live in a simulation is relatively new, it's really just a modernized version of a question that has persisted throughout humanity's past – why are there things, and why are we part of those things? In the early days of humanity, there were few tools or methods available for understanding the world. There was limited history to lean on, and therefore limited cultural knowledge (fire, the wheel, agriculture, etc. were yet to be discovered) – man's early understanding of the world would have been based mostly on the regularities he was able to observe, together with the cultural knowledge of the time (which berries not to eat, where to find water, when the seasons change, etc.). This level of understanding may not have been too different from that of the other apes, who can also identify relevant regularities within their own environments. The major difference, however, was the human ability to ask questions – an ability that would have evolved alongside, and been aided by, the adaptation of language. These early humans would have been the first organisms on the planet to realize that their own knowledge base was incomplete, and that other humans held more complete knowledge bases of whom they could ask questions. A major part of what first distinguished Homo sapiens from the other apes was this ability to recognize knowledge gaps – and it can be viewed as a drastic step upward in consciousness, as it requires an individual to have some awareness of the goings-on of their own mind. This ability to ask questions, together with a furthered understanding of object permanence, would have quickly led humans to ask questions about their own origins, the origins of the world, and, eventually, the origins of existence. (Side note – this curiosity seems like it could be an evolved trait, as genes driving an innate urge to recognize and correct inconsistencies in a knowledge base would have been beneficial…though there's also a significant cultural component.)

Early questions of origins would have been joined by similarly difficult questions about the nature of the world – a significant portion of that world (weather, tides, disease, death, etc.) would have been a mystery to early humans. It's hard to fathom how radically different the worldview of humans in those days must have been – today, our unknowns lie at the level of particle physics (hugely removed from the everyday domain of most humans), while for these early humans the unknown encompassed every part of life. It fell to particular humans in the group (the shamans, etc.) to craft answers to these unknowns, and the answers would have been refined as the ideas were passed down over generations and modified to better fit observed phenomena. Even in the time of these shamans, humans would have seemed to be the most powerful and creative animals on the planet, and so the shamans took the path of least resistance with regard to their answers – ascribing the effects to anthropomorphized gods. This idea (or "meme", in the Dawkins sense) had incredible "reproductive" power, as it satisfied all gaps in the knowledge base (or at least had the capability to, with minor adjustments such as new gods), and in many cases (especially in more recent history) came with an obligation to spread the message to others (much clarity is added to this aspect of religion by viewing it through the lens of a Dawkins "meme" – the religions that propagated themselves would have been the ones to survive and prosper). For a time, the origin questions were answered.

However, these answers were not to maintain their power forever. Over the course of human history, the knowledge base continued to expand, cutting down the number of phenomena that required an answer from "outside the system". These developments have been especially rapid over the last ~500 years, and have progressed to the point where all earthly (and cosmic) phenomena are now better explained by observed scientific regularities, leaving to gods only the questions of origins and the afterlife. The question of the afterlife arose together with the answer of gods, and therefore has a simple answer from a scientific perspective (where an "I" is necessarily the same as the processes of a living brain – and there are none of these processes in a dead brain), but the question of origins arises within a scientific framework as well, and still awaits an adequate answer. It is this question to which the potential answer of a simulation applies.

Over the last ~80 years, the development of computers has driven a paradigm shift in our understanding of complexity and emergent properties. To understand how monumental this shift has been, it's helpful to take a step back and picture the worldview of an educated person in the early 1900s. They, much like us today, would have felt that the world was yielding to a scientific interpretation. Maxwell's equations and Darwin's Origin of Species had been published ~50 years prior, leaving plenty of time for review and incorporation into the paradigms of thinking. Rutherford and Bohr were in the midst of providing clarity as to the essence and mechanisms of atomic particles. Santiago Ramón y Cajal had just leveraged Golgi's staining methods to develop the neuron doctrine, the beginning of understanding how brains function. Industry was growing rapidly, and it would have felt like man, more than ever, was complete master of his world. Following from this mastery, it would have felt like there was something special about man and mind – some unique attribute that allotted us such capacity to understand and interact with the world. Even with Cajal's neuron doctrine, the idea of creating something as complex as the human mind from some arrangement of smaller, "un-thinking" parts would have seemed absurd. It follows that any understanding of origins sans gods and supernatural powers would have made equally little sense to the humans of that age. Then, in 1936, Alan Turing hit upon his idea of a theoretical "Turing machine", which could run any formally specified program with only a movable tape and a simple set of update rules. This way of thinking started to reveal the power of relatively simple rules to create complex environments and perform complex tasks. The idea has been growing since, with some key contributors being Conway's Game of Life (showing how a simulation with extremely simple rules can give rise to extraordinarily complex behaviors), Deep Blue (revealing the capability of programs to "do more" than their creators), and OpenWorm (a first step on the path toward simulating brains and functional actions). The Game of Life has been particularly impactful, as it suggests that our reality could be viewed from a similar type of starting point, just with a much larger grid and significantly more complex update rules (including some level of the uncertainty identified by quantum mechanics). With the rapidly increasing power of computers, this question has gained legitimacy – it seems we might one day have the power to simulate mini-universes in detail, so how can we exclude the possibility of some larger universe simulating us?
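To give a sense of just how little machinery the Game of Life needs, here's a minimal Python sketch of it (the function name and seed pattern are my own illustration, not any standard library) – two update rules and a handful of starting cells are enough to produce a "glider" that crawls across the board indefinitely:

```python
from collections import Counter

def step(live_cells):
    """Advance one generation: a cell is alive next turn if it has exactly 3
    live neighbors, or exactly 2 and was already alive."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# Seed the board with a classic "glider" and watch it translate across the grid.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(8):
    print(f"gen {generation}: {sorted(cells)}")
    cells = step(cells)
```

From a starting point this simple, people have built oscillators, self-replicating patterns, and even full Turing machines inside the Game of Life – which is exactly what makes it such a compelling analogy.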

The simulation hypothesis posits that if any civilizations can create universe-like simulations (and choose to), there will be many such simulations (due to the ease of creating them and the possibility of simulations inside simulations), and therefore, if this is possible, there is a very significant chance we're living in one. The best way of exploring the simulation hypothesis seems to be analyzing what it would look like for us to simulate another world (although this provides no visibility into the case where our universe is a simulation but lacks the computational power necessary to craft additional simulations below it. This view varies from the traditional belief in god(s) only in that it ascribes a familiar mechanism to the method of creation – it's still turtles all the way down). For this purpose, let's assume we've been able to build an idealized computer (the exact size isn't as important – it's more about where the limits of our physical universe lie). There seem to be two (or three) ways we could look to construct this simulation so that the experiences of its residents are comparable to ours:

  • Simulate the universe in totality from its beginnings, starting with a particular set of laws of physics and a particular arrangement of matter, and then letting those update rules (the laws of physics) exert themselves on each particle for the lifetime of the simulated universe. This would create a simulated universe similar to the one we live in (regardless of whether ours is real or simulated). As an analogy, think of a semi-infinite board of the Game of Life, with a random set of squares turned on to start (the random squares differ from the concentrated matter associated with a big bang, but fit the analogy better).
  • Craft a focused simulation targeting a specific world (or worlds). This simulation could start at any point in the planet's (or system's) history, and would only need to simulate in full detail the particles within a specified range. Everything outside that range would be handled separately, but would be designed to function in accordance with the update rules (i.e. distant stars would be simulated simply as a certain type of input stream into the simulated universe, with care taken to ensure consistency with the laws of physics governing the simulated world). Back to the Game of Life analogy, this would be comparable to a fixed-size board, with the initial structures chosen specifically to create certain behavior, and with interactions at the edges of the board specified to emulate a vast expanse of board beyond (with this emulation being substantially cheaper from a computational perspective due to its graininess).
    • A possible subset of option 2 would be a simulation targeting specific individuals / minds, with the world around these minds treated as "outside the range" in this scenario. The advantage (from a computational perspective) here is that non-focus objects could be simulated at the level of perception of the focus individuals. For example, assuming humans to be the focus individuals, the objects we experience (cars, telephones, the ocean, etc.) could be simulated at the level of our perception rather than at the subatomic level. The simulation would need to be set up to properly handle cases where humans leverage tools to perceive at a lower level (e.g. using electron microscopes or particle accelerators to probe matter at the atomic and subatomic scale), but even so this would represent a significant decrease in the computational load.

These options progress from greatest to least computational load: option 1 requires simulating all particles of the entire space over all time, option 2 requires simulating all particles within a portion of space over some span of time, and option 2a requires simulating only particular complex patterns (minds), together with a higher-level environment simulation, over a far smaller amount of time.
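To make option 2 concrete in Game of Life terms, here's a rough Python sketch (boundary_model is a hypothetical placeholder, not any real API): a fixed-size window is simulated in full detail, while the squares just beyond its edges are supplied by a cheap, coarse stand-in – the analogue of feeding "distant stars" into the simulated world as a consistent input stream:

```python
import random

SIZE = 64  # the full-detail window is SIZE x SIZE squares

def boundary_model(x, y, t):
    """Cheap stand-in for everything outside the detailed window.
    Here it's just a sparse, deterministic pseudo-random flicker; a real
    simulator would pick something consistent with the interior's physics."""
    random.seed(hash((x, y, t)))
    return random.random() < 0.05

def step(grid, t):
    """One Game of Life update over the detailed window, with cells just
    outside it supplied by the coarse boundary model."""
    def alive(x, y):
        if 0 <= x < SIZE and 0 <= y < SIZE:
            return grid[y][x]
        return boundary_model(x, y, t)   # beyond the edge: coarse emulation

    new_grid = [[False] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            neighbors = sum(
                alive(x + dx, y + dy)
                for dx in (-1, 0, 1)
                for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
            )
            new_grid[y][x] = neighbors == 3 or (neighbors == 2 and grid[y][x])
    return new_grid

# Seed a small pattern in the middle of the window and advance a few generations.
grid = [[False] * SIZE for _ in range(SIZE)]
for x, y in [(31, 30), (32, 31), (30, 32), (31, 32), (32, 32)]:  # a glider
    grid[y][x] = True
for t in range(10):
    grid = step(grid, t)
print(sum(cell for row in grid for cell in row), "live cells after 10 steps")
```

The interior follows the exact same rules as before; only the edge treatment is "faked", and far more cheaply than simulating the board beyond would be.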

Option 1 is interesting in that it's unclear whether it even deserves the label of "simulation" – as the only feasible way to construct it would be to essentially "create" each individual particle. To extend the Game of Life analogy: if you had a semi-infinite board, the only effective means of computation in a reasonable time frame would be to have a separate processor for each square, allowing for fully parallel computation. Each processor would be programmed to do the simple task of updating its square based on the update rules. There's no intrinsic difference between a square "actually" exhibiting that update behavior due to its natural properties and a square doing so by virtue of a processor driving the updates – the "processor + square" system is just as "real". This type of reality would differ from a "base" reality only in that the simulators could decide to turn the power off one day – but there would be no means of distinguishing it from a "base" reality from within. The completely parallel nature of the processing ensures our inability to ever cause "glitches", as the computational load is similar across all states (each particle updates based only on its immediate environment and the update rules). However, the parallel processing also limits how far this type of simulation can extend, as the computational resources required ensure that the simulated universe will be dramatically smaller than the simulating universe. Taking our idealized processor, even if transistors were the size of an atom, it would still take at minimum one atom (and realistically many) to simulate each elementary particle (fermions, bosons, etc.), and thus it would take something like a solar system's worth of computational resources to accurately simulate a planet-sized universe (directionally). If limited to this type of simulation, a far more practical approach would be to simply arrange matter in the desired format within our own universe, and let the system function according to the update rules of our own universe. Based on these limitations, it seems that any type of tiered simulation of universes would require simulations like option 2.
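To put rough numbers behind that claim, here's a back-of-envelope calculation – the atom counts are order-of-magnitude figures, and the overhead of a million hardware atoms per simulated particle is an assumption chosen purely for illustration:

```python
import math

# Order-of-magnitude inputs (rough, commonly cited figures).
ATOMS_IN_EARTH = 1e50               # atoms making up the Earth
ATOMS_IN_SUN = 1e57                 # the Sun holds almost all of the solar system's mass
PARTICLES_PER_ATOM = 10             # each atom is a handful of elementary particles
HARDWARE_ATOMS_PER_PARTICLE = 1e6   # assumed overhead: processor + memory per simulated particle

particles_to_simulate = ATOMS_IN_EARTH * PARTICLES_PER_ATOM
hardware_atoms_needed = particles_to_simulate * HARDWARE_ATOMS_PER_PARTICLE

print(f"hardware atoms needed: ~10^{round(math.log10(hardware_atoms_needed))}")
print(f"fraction of the Sun's atoms: {hardware_atoms_needed / ATOMS_IN_SUN:.0%}")
# With these assumptions, simulating one planet in full particle-level detail
# consumes roughly a star's worth of matter in computing hardware.
```

The exact overhead doesn't matter much – under any plausible assumption, the hardware dwarfs the thing being simulated, which is the point.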

Option 2 looks to be the more likely choice in a scenario with many simulations, and it also fits the intuitive idea of a simulation as something more "fake" than what we believe about our universe. Interestingly, in this case we (i.e. intelligent beings, not necessarily humans with fleshy bodies) may actually have the means to discover the nature of the simulation – especially if it's of the type where everything is simulated at the level of our minds. We can imagine intelligent life expanding far throughout the universe and eventually reaching some sort of barrier, or even some region where the local laws of physics begin to change. In the case where the simulation occurs at the level of minds, there are even more opportunities to discover the true nature of our world – such as by drastically increasing the computational requirements of the simulation through the en masse observation of subatomic particles. These are the types of simulations we should focus on with the simulation hypothesis – and we'll dive further into potential means of understanding the nature of our world in the conclusion.

So do we live in a simulation, crafted by some otherworldly society outside our sphere of knowledge? It’s easy to say that it doesn’t matter, as we can’t figure it out, and that we should just focus on the observable universe that we know. However, it is an interesting question (and as covered earlier, humans have an innate curiosity!), so it’s worth thinking about ways to potentially determine the essence of the universe we inhabit. At the very least, we can rule out certain types of simulations as the nature of our universe – and at best, we can make headway on the seemingly impenetrable problem of why anything exists. 

The best way to frame the problem seems to be to think about how an actor within a computer program made by a human (and running on human computers, however large) could determine the essence of its existence, assuming a simulation of the "type 2" nature described above. As discussed above, the most effective strategy would be to try to create a situation that the underlying processor cannot keep up with, or where shortcuts would need to be taken. These shortcuts could involve simplifying certain regions (e.g. deep space) or hierarchical levels (e.g. subatomic particles) of the universe, limiting the computational fidelity of objects not under intelligent observation, or conducting the simulation at a drastically different rate of time (which we'd have no way of detecting, though this would make the simulation significantly less interesting for the simulating culture). If you can identify the usage of any of these shortcuts, you have some evidence of a universe crafted by another intelligent being and running on some sort of hardware. We likely won't be able to make much headway on this problem in our current state, limited by 100-year lifespans and soft, complex bodies that don't travel well – but as human-level and superhuman artificial intelligence come into being, they'll have the opportunity to spread through the universe and test some of these ideas.
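To make the "shortcut" idea concrete, here's a toy Python sketch (the LazyUniverse and Region names are hypothetical, invented just for this illustration) of a simulator that renders regions in full particle-level detail only while an observer is probing them, falling back to a cheap coarse model otherwise – the proposed experiment amounts to observing as much as possible at once and watching for strain:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    particle_count: float        # cost of simulating this region in full detail
    observed: bool = False       # is an intelligent observer probing it right now?

@dataclass
class LazyUniverse:
    regions: list
    budget: float = 1e9          # compute the simulator can spend per tick

    def tick(self):
        """Spend full detail only where someone is looking; a cheap coarse
        model (a millionth of the cost) everywhere else."""
        cost = sum(
            r.particle_count if r.observed else r.particle_count * 1e-6
            for r in self.regions
        )
        return cost, cost > self.budget   # (compute spent, did we overload?)

universe = LazyUniverse(regions=[
    Region("lab on Earth", particle_count=1e8, observed=True),
    Region("deep space", particle_count=1e12),
])
print(universe.tick())   # (~1e8, False): comfortably within budget

# The proposed "experiment": observe everything at once and look for strain.
for region in universe.regions:
    region.observed = True
print(universe.tick())   # (~1e12, True): the shortcut can no longer hide
```

From the inside, "strain" might look like unexplained inconsistencies, changed regularities, or regions that resist fine-grained measurement – none of which we could check today, but which a space-faring intelligence might.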

Author’s Note: If you enjoyed this post, please consider subscribing to be notified of new posts 🙂
