Computational Limits and Simulation Argument

1) Computational Demands of Simulating Reality

The simulation hypothesis, a thought-provoking concept, posits that we might be living within a sophisticated computer simulation. A key question arises: could such a feat even be technically possible?
The sheer computational power required to emulate a human mind is mind-boggling. Bostrom surveys estimates ranging from roughly 10^14 operations per second, based on replicating a simple neural function (contrast enhancement in the retina) whose behavior has already been reproduced in silico, to a staggering 10^16-10^17 operations per second, based on the number of synapses in the brain and their firing frequency. Even these figures may be conservative: simulating the intricate details of synaptic interactions and dendritic trees could demand more processing power still. Fortunately, there is reason for optimism. The human brain likely employs redundancy at the microscopic level to compensate for the inherent unreliability of individual neurons, which suggests that more reliable, non-biological processors could achieve similar results with significantly less computational overhead.

The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico, contrast enhancement in the retina, yields a figure of ~10^14 operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second. Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.

Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly, 53.211 (2003): 243-255.
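To make these figures concrete, here is a minimal back-of-envelope sketch in Python. The decomposition into synapse count, firing rate, and operations per synaptic event is our own illustrative assumption; Bostrom reports only the resulting totals.

    # Back-of-envelope reproduction of the two brain-emulation estimates
    # discussed above. The breakdown into synapse count, firing rate, and
    # operations per synaptic event is an illustrative assumption; the
    # paper only quotes the resulting totals.

    SYNAPSES = 1e14          # assumed number of synapses in the human brain
    FIRING_RATE_HZ = 100     # assumed average firing frequency
    OPS_PER_EVENT = (1, 10)  # assumed operations per synaptic event

    low = SYNAPSES * FIRING_RATE_HZ * OPS_PER_EVENT[0]
    high = SYNAPSES * FIRING_RATE_HZ * OPS_PER_EVENT[1]

    print("retina-based estimate:  ~1e14 ops/s")
    print(f"synapse-based estimate: ~{low:.0e} - {high:.0e} ops/s")
    # -> roughly 1e16 - 1e17 ops/s, matching the range quoted above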

The true challenge lies in simulating not just individual minds, but the entire tapestry of human history, a task of unimaginable scale. Bostrom estimates that simulating human history with sufficient fidelity could require a staggering 10^33 to 10^36 operations. While this seems astronomical, the potential for such a feat may lie in advanced future technologies. The concept of a “planetary-mass computer,” a hypothetical machine utilizing the entire mass of a planet for computational purposes, offers a glimpse of the potential scale of future computing power. Even with conservative estimates of nanotechnological capabilities, such a computer could theoretically simulate the entire history of humankind with a minuscule fraction of its processing power.

It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level. While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 - 10^36 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.

Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly, 53.211 (2003): 243-255.
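The arithmetic in the quoted passage is easy to check. A small Python sketch using Bostrom’s round numbers:

    # Sanity check of the quoted claim: a planetary-mass computer at
    # ~1e42 ops/s could run an ancestor-simulation (~1e33-1e36 ops)
    # with less than one millionth of its capacity for one second.

    PLANETARY_OPS_PER_S = 1e42       # Bostrom's rough figure
    ANCESTOR_SIM_OPS = (1e33, 1e36)  # estimated cost of simulating human history

    for ops in ANCESTOR_SIM_OPS:
        fraction = ops / PLANETARY_OPS_PER_S
        print(f"{ops:.0e} ops -> {fraction:.0e} of one second of capacity")
    # -> 1e-09 and 1e-06: even the high estimate fits within one
    #    millionth of a single second of the machine's output.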

We were inspired by Nick Bostrom’s exploration of the computational requirements for simulating a reality like ours. By delving into the staggering figures he presents, we sought to answer a fundamental question: where do we stand in terms of our computational capabilities, and what are the potential limits to what we can achieve?

2) Where are we?

Supercomputing

The quest to simulate complex systems like the human brain or entire universes demands immense computational power. While we’re still far from achieving the scale necessary for such grand simulations, advancements in supercomputing are steadily narrowing the gap.

One notable example is the Aurora supercomputer at Argonne National Laboratory. This machine has achieved a sustained performance of 1.012 exaFLOPS, meaning it can perform over a quintillion (10^18) floating-point operations per second. Its theoretical peak performance is even more impressive, reaching nearly 2×10^18 FLOPS.
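To gauge how far this is from Bostrom’s requirements, we can ask how long Aurora would need to perform the ~10^33-10^36 operations of an ancestor-simulation, under the simplifying assumption that one floating-point operation counts as one of Bostrom’s “operations”:

    # How long would Aurora, at ~1e18 sustained FLOPS, take to perform
    # the ~1e33-1e36 operations estimated for an ancestor-simulation?
    # (Equating one FLOP with one "operation" is a simplifying assumption.)

    AURORA_FLOPS = 1.012e18
    SECONDS_PER_YEAR = 3.15e7

    for total_ops in (1e33, 1e36):
        years = total_ops / AURORA_FLOPS / SECONDS_PER_YEAR
        print(f"{total_ops:.0e} ops: ~{years:.0e} years")
    # -> roughly 3e7 to 3e10 years, i.e. up to about twice the age of
    #    the universe; the gap to Bostrom's planetary-mass computer is
    #    about 24 orders of magnitude.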

GPU

While supercomputers provide immense raw computing power, GPUs have emerged as a dominant force in accelerating specific tasks, particularly in the realm of artificial intelligence. NVIDIA’s latest RTX 50 series showcases the remarkable strides made in GPU technology: the flagship cards are rated at up to roughly 4,000 AI TOPS (trillions of AI operations per second), a substantial leap over previous generations.

Key features of the RTX 50 series include:

· the new Blackwell GPU architecture;
· fifth-generation Tensor Cores with support for low-precision (FP4) AI workloads;
· DLSS 4 with Multi Frame Generation;
· high-bandwidth GDDR7 memory.
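As a rough yardstick, we can set a flagship card’s throughput against the brain-emulation estimates from section 1. A caveat: AI TOPS count low-precision tensor operations rather than the general-purpose operations Bostrom had in mind, so the comparison below is at best an order-of-magnitude illustration.

    # Rough comparison of a flagship RTX 50-series rating (~4,000 AI TOPS,
    # i.e. ~4e15 low-precision ops/s) against the brain-emulation
    # estimates above. TOPS are not general-purpose operations, so this
    # is only an order-of-magnitude illustration.

    GPU_OPS_PER_S = 4e15  # ~4,000 AI TOPS
    BRAIN_ESTIMATES = {"retina-based (low)": 1e14,
                       "synapse-based (high)": 1e17}

    for name, ops in BRAIN_ESTIMATES.items():
        print(f"{name}: one GPU ~ {GPU_OPS_PER_S / ops:.2g} brain(s)")
    # -> ~40 brains under the low estimate, ~0.04 under the high one.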

ASICs

Beyond general-purpose processors and GPUs, specialized chips known as Application-Specific Integrated Circuits (ASICs) are emerging as crucial players in accelerating simulations. Tailored to specific tasks, ASICs offer unparalleled performance and energy efficiency. AI accelerators, a prominent example, are designed to optimize deep learning computations. Recent advancements in AI accelerators have achieved remarkable milestones, with some chips exceeding 100 TeraFLOPS of AI performance. These chips are driving breakthroughs in natural language processing, computer vision, and other AI-driven applications.

Furthermore, the rise of neuromorphic chips, inspired by the human brain’s neural architecture, is opening new avenues for simulation. These chips, with their inherent parallelism and low-power operation, are particularly well-suited for simulating biological systems and developing more energy-efficient AI algorithms. While still in their early stages, neuromorphic chips hold immense promise for revolutionizing fields such as neuroscience, robotics, and AI.

3) Limitations

While the computational power demonstrated by supercomputers, GPUs, and specialized ASICs is impressive, it’s essential to acknowledge the significant limitations that still exist. The vast computational resources required to simulate complex systems like the human brain or the universe are far beyond our current capabilities.

The Insufficiency of Current Estimates

The figures presented in “Where are we?”, while staggering, fall many orders of magnitude short of the demands estimated in section 1. Worse, those demand estimates are themselves based on simplified models of neural networks and physical systems, and they may not accurately reflect the complexity of real-world phenomena. The human brain, for instance, is a highly dynamic and interconnected system that is not fully understood, making it difficult to precisely quantify the computational requirements for simulating its functions.

Fundamental Limits of Computation

Even if we could overcome the engineering challenges of building more powerful computers, there are fundamental limits to computation that may constrain our ability to simulate complex systems. These limits include:

· Landauer’s principle, which sets a minimum energy cost of kT·ln(2) for erasing a single bit, tying irreversible computation to heat dissipation;
· the speed of light, which bounds how quickly signals can propagate across a computer of a given physical size;
· quantum-mechanical bounds on processing rate, such as Bremermann’s limit of roughly 10^50 bits per second per kilogram of matter;
· the Bekenstein bound, which limits how much information can be stored in a finite region of space.
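To see how such limits bite, consider Landauer’s principle applied to the ancestor-simulation estimate from section 1, under the loose assumption of one irreversible bit operation per simulated operation at room temperature:

    # Illustrative check against one fundamental limit: Landauer's
    # principle puts the minimum energy of an irreversible bit operation
    # at kT*ln(2). Assuming (loosely) one bit erasure per operation at
    # room temperature, what is the minimum energy cost of an
    # ancestor-simulation?

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300             # room temperature, K

    e_per_op = k_B * T * math.log(2)  # ~2.9e-21 J per operation
    for total_ops in (1e33, 1e36):
        print(f"{total_ops:.0e} ops: >= {total_ops * e_per_op:.1e} J")
    # -> ~2.9e12 to ~2.9e15 J, all dissipated as heat; the upper figure
    #    is on the order of a megaton of TNT. Reversible computing could
    #    in principle evade this bound, at the cost of other overheads.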
