The Peer-to-Peer (P2P) Simulation Hypothesis
What is the Peer-to-Peer Simulation Hypothesis?
The idea that we may be living inside of a computer simulation is not new. It was famously depicted in the 1999 film The Matrix and argued to be probable by philosopher Nick Bostrom. However, most forms of the "simulation hypothesis" (including Bostrom's) are purely speculative. They do not explain or make predictions about the world we perceive.
A new version of the simulation hypothesis--the Peer-to-Peer (P2P) Simulation Hypothesis--is not purely speculative. It:
- Is based on serious scientific and philosophical hypotheses.
- Explains features of the physical world (including the quantum world) that no other theory explains.
- Makes predictions about our world, and so may be confirmed or falsified.
The P2P hypothesis holds that we are living in a peer-to-peer networked computer simulation. Some computer simulations have a "dedicated" central server (a single computer running the simulation that all other computers access). However, peer-to-peer networked simulations have no central server. The "simulated reality" is simply a vast network of different computers (a "cloud") running the simulation in parallel--as illustrated in the following diagrams:
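To make the architectural contrast concrete, here is a minimal Python sketch. It is purely illustrative: the class names CentralServerWorld and PeerToPeerWorld, and all the details, are hypothetical stand-ins rather than anything drawn from the papers listed below. A dedicated server stores one authoritative location for an object; in a peer-to-peer network, each peer holds its own, slightly different representation.

```python
import random


class CentralServerWorld:
    """Dedicated-server model: one authoritative copy of the world state."""

    def __init__(self, position: float):
        self.position = position  # the single 'real' location of an object

    def where_is_object(self) -> float:
        return self.position  # every client simply asks the server


class PeerToPeerWorld:
    """P2P model: every peer runs its own copy of the simulation in parallel."""

    def __init__(self, position: float, n_peers: int = 1000, jitter: float = 0.05):
        # Each peer holds a slightly different estimate of the object's location.
        self.peer_positions = [position + random.gauss(0.0, jitter) for _ in range(n_peers)]

    def where_is_object(self) -> list:
        # There is no single answer: only the ensemble of peer representations.
        return self.peer_positions


server_world = CentralServerWorld(position=1.0)
p2p_world = PeerToPeerWorld(position=1.0)

print(server_world.where_is_object())         # one determinate value
print(len(set(p2p_world.where_is_object())))  # many slightly different values
```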
What the P2P Hypothesis Explains: Quantum Mechanics & Relativity
Because each computer on a peer-to-peer network runs its own simulation, the computational structure of a peer-to-peer simulation inherently has all of the following properties (several of which are illustrated in the code sketch after this list):
- The location of any "object" within the simulation is a computational superposition, i.e. an object represented at position A on computer A, position B on computer B, position C on computer C, etc. will be coded, at the level of the whole simulation, as being simultaneously in positions A, B, C, etc. (superimposed in all of those locations at once).
- "The" location of any object or property in a P2P simulation is therefore also indeterminate, given that each computer on the network has its own representation of where "the" object or property is, and there is no dedicated server on the network to represent where the object or property "really" is (any object or property "really" is represented at many different positions on the network, thanks to slightly different representations on many computers all operating in parallel).
- Any measurement taken by any single measurement device on a P2P network also thereby affects the network as a whole (since what one computer measures will affect what other computers on the network are likely to measure at any given instant), giving rise to a massive measurement problem (one can only measure where an object is on the network by disturbing the entire network, thereby altering where other computers on the network will represent the particle as being).
- Because different machines on the network represent the same object in slightly different positions at any given instant (with some number n of machines representing a given object at position P, some other number n* of machines representing it at position P*, etc.), a dynamical description of where a given object/property probably is in the environment will have the features of a wave (viz. an amplitude equivalent to the number of computers representing the object at a given position at a given instant, and a wavelength equivalent to the dynamical change in how many computers represent the object at that position at the next instant).
- By the same token, any particular measurement on any particular computer will result in the observation of the object as located at a specific point.
- Any particular measurement on any particular computer will result in the appearance of a "collapse" of the wave-like dynamics of the simulation into a single, determinate measurement.
- It is also a natural result of a peer-to-peer network that single objects can "split in two," becoming entangled (in a peer-to-peer network, multiple computers can, in a manner of speaking, get slightly out of phase, with one or more computers on the network coding for the particle to pass through a boundary while one or more other computers code for it to bounce backwards; if the coding is right, all of the computers on the network will treat the "two" resulting objects as simply later continuants of what was previously a single object).
- All time measurements in a P2P simulation are relative to observers. Each measurement device on a P2P simulation (i.e. each game console) has its own internal clock, and there is no universal clock or standard of time that all machines share.
- Because the quantized data comprising the physical information of a P2P simulation will have to be separated/non-continuous, much as there are "spaces" between pits of data on a CD/DVD/Blu-Ray disc (see image below), there must be within any such simulation something akin to the Planck length, an absolute minimum length below which measurements of space-time cannot be taken in principle (a feature of our world for which, at present, "there is no proven physical significance").
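The following toy Python sketch illustrates several of the properties just listed: superposition/indeterminacy across peers, a wave-like amplitude (the count of peers representing each position), apparent collapse upon measurement, peer-relative clocks, and a minimum grid spacing. It is a sketch under simplifying assumptions (a one-dimensional grid, a fixed number of peers, and a naive broadcast-on-measurement rule), none of which are drawn from the formal papers cited at the end.

```python
import random
from collections import Counter

GRID_STEP = 1          # minimum representable spacing (the 'Planck length' analogue)
N_PEERS = 10_000

# 1. Superposition / indeterminacy: each peer codes the object at a slightly
#    different grid position, so 'the' position of the object across the whole
#    network is a spread of values, not a single number.
peer_positions = [round(random.gauss(50, 3)) * GRID_STEP for _ in range(N_PEERS)]

# 2. Wave-like amplitude: the number of peers representing the object at each
#    position plays the role of an amplitude over positions.
amplitude = Counter(peer_positions)
print("Most-represented positions:", amplitude.most_common(3))

# 3. Measurement and 'collapse': a measurement made on one peer yields a single
#    determinate value, and (in this toy rule) that value is broadcast to the
#    other peers, disturbing the whole network's representation.
measuring_peer = random.randrange(N_PEERS)
observed = peer_positions[measuring_peer]
peer_positions = [observed] * N_PEERS          # the spread collapses to one value
print("Observed position:", observed)
print("Spread after measurement:", len(set(peer_positions)), "distinct value(s)")

# 4. No universal clock: each peer keeps its own internal clock, so time
#    measurements are relative to the measuring device, not to a shared standard.
peer_clocks = [random.uniform(0.0, 1.0) for _ in range(N_PEERS)]
```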
Notice that the nine properties listed above just are observed features of quantum mechanics and relativity in our world. The Peer-to-Peer Simulation Hypothesis explains features of our world that otherwise have no known explanation. Physicists, to this day, do not have any deep theory of why our world is quantum mechanical or relativistic. The equations of quantum mechanics and relativity merely reflect the fact that our world has these strange features. The Peer-to-Peer Simulation Hypothesis provides the first unified explanation of why our world is quantum mechanical and relativistic. It shows that "quantum mechanics" and "relativity" emerge naturally and inevitably from the purely computational structure of a peer-to-peer simulation.
What Else the P2P Hypothesis Explains: Philosophical Puzzles
Our reality also has a number of philosophically baffling features. Among them are:
- The mind-body problem
- The problem of personal identity
- The problem of time's passage
- The problem of free will
The P2P model provides what I take to be a unified explanation of these problems as well, while also providing possible new resolutions to them:
Explaining the mind-body problem: It's a curious fact that it seems to many of us that no matter how complete a physical explanation might be, such an explanation could never possibly account for consciousness (i.e. the "soul"). The P2P hypothesis predicts and explains this problem. Observers trapped in a P2P simulation would be convinced--just as many of us are--that there is something about their subjective point-of-view that cannot be captured in the physics of their world. And they would be right. The hardware upon which the simulation is running--the processing apparatus (viz. DVD laser apparatus/processor)--would comprise their subjective point-of-view, and be inaccessible to them within the simulation. More generally, the P2P model holds that a reality like ours is composed of two fundamentally different types of things interacting: (A) "hardware" (i.e. consciousness/measurement apparatus) and (B) "software" (i.e. physical information).
Explaining the problem of personal identity: Many of us are tempted to say that our personal identity consists in something over and above any biological or psychological facts about us--that our survival over time is neither a matter of our body surviving nor a matter of our personality surviving, but rather a matter of our "soul" surviving. The P2P Simulation Hypothesis explains this as well. It holds that our identities consist in hardware in a higher reference-frame reading the software of the simulation we are in. We survive over time because we are, in fact, neither our "body" nor our "psychology" (which are both software). We are the hardware that reads the software that comprises our (digital) bodies and psychologies.
Explaining problems with time: There are broadly two theories of time in philosophy: the "A-theory," which says that time passes (viz. a "moving spotlight"), and the "B-theory," which says that time is nothing more than an ordered series of events (viz. time just is some events ordered before/after others). Both theories seem to face problems. A-theories seem hopelessly mysterious. B-theories seem to face problems making sense of change (i.e. if an ordered series of events is all that time is, how does time pass?). The P2P Hypothesis provides a new answer: one that synthesizes both positions via a kind of mechanism/model that we already understand. When I play back a CD, the CD contains an ordered series of information, and that information is experienced in real time, moving forward, only insofar as a distinct observation-mechanism (the CD-player's processor) reads the information. This suggests that in order to make sense of time (i.e. its being both ordered and moving), we need a dualist theory--and the P2P Hypothesis gives us a concrete example of how such a dualist theory works.
Explaining (and solving?) the problem of free will: Einstein taught us that the way things appear from one reference-frame may appear the opposite from another reference-frame. Here's a simple example. If you were inside an enclosed elevator moving at an extremely fast but uniform velocity (say, 100,000 kilometers/hr), you would have no idea you were moving. You would think the elevator was still, because the elevator is not accelerating relative to you. A person outside the elevator, however, would see you moving at an immense speed relative to them. Now consider the problem of free will. From our perspective within our world, all of our actions appear to be determined by the laws of physics. This, obviously, is the issue that gives rise to classical problems of free will (viz. how can we be free if all of our actions are determined by physical law?).
P2P simulations and other online videogames show how (A) the appearance of determinism or causal closure within a simulation can actually be an illusion of sorts generated by (B) causal interaction in a higher reference-frame not determined by any law of physics within the simulation. Allow me to explain.
Anyone who has played an online simulation knows that once one finishes playing a game, one can rewind the game back to the beginning, press the "play" button, and watch the game that just completed inexorably play out just as it did the first time. Accordingly, although the events that played out inside the simulation were the result of inputs from us on the outside, to any observer trapped within the simulation it would appear that everything in their world is determined by physical law. From their perspective, their laws of physics would appear to be "inexorable." They would think, for instance, that if their world were rewound back to its beginning, it would have to deterministically evolve just as it did (with each of their actions being determined by its initial state and physical laws!).
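Here is a minimal Python sketch of this rewind-and-replay point. The step and run functions are hypothetical stand-ins, not part of any actual game engine: external inputs are logged during "live" play, and replaying the same log from the same initial state reproduces the run exactly, so from inside the simulation the whole history looks determined.

```python
import random


def step(state: int, player_input: int) -> int:
    """One tick of a toy simulation: the next state depends only on the
    current state and the externally supplied input."""
    return (state * 31 + player_input) % 1_000_000


def run(initial_state: int, inputs: list) -> list:
    """Run the simulation forward over a recorded log of inputs."""
    trajectory = [initial_state]
    state = initial_state
    for player_input in inputs:
        state = step(state, player_input)
        trajectory.append(state)
    return trajectory


# 'Live' play: inputs are freely chosen from outside the simulation.
recorded_inputs = [random.randrange(10) for _ in range(20)]
first_run = run(initial_state=42, inputs=recorded_inputs)

# 'Rewind and replay': the same initial state and the same input log yield
# the same history, tick for tick.
replay = run(initial_state=42, inputs=recorded_inputs)
print("Replay matches the original run:", replay == first_run)
```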
To make a long story short, the model shows how libertarian free will in a higher reference-frame (i.e. free will not determined by any physical law within a simulation) can generate the appearance of determinism in a lower reference-frame. Libertarian free will, in other words, is compatible with determinism--provided we distinguish between reference-frames.
Want to Learn More About the Peer-to-Peer Simulation Hypothesis?
For more on the P2P Hypothesis, read:
- Marcus Arvan (2013). A New Theory of Free Will. Philosophical Forum 44 (1): 1-48.
- Marcus Arvan (2014). A Unified Explanation of Quantum Phenomena? The Case for the Peer-to-Peer Simulation Hypothesis as an Interdisciplinary Research Program. Philosophical Forum 45 (4): 433-446.
- Marcus Arvan (unpublished manuscript). The P2P Simulation Hypothesis and Meta-Problem of Everything.
For additional work related to the P2P Hypothesis, see:
- Mesh World P2P Simulation Hypothesis (2016) - by Eric Grange (Development Lead & Chief Architect, Creative IT).
- 'Features, not bugs' - at roadtovr.com