# Relativistic Indeterminacy and the Holographic Universe: Thoughts on the Limits of Information

I work with computers for a living and I like to think about theoretical physics in my spare time. The area that has most often occupied my mind in recent years is relativity. I recently completed a preliminary draft of a paper on relativity which sums up my experience with it over the last thirty years or so, my focus being the nature of time. In the process of putting the finishing touches on that work, I came across some books by Lee Smolin (*The Trouble With Physics* and *Time Reborn*) which excited me tremendously for several reasons:

- Smolin’s viewpoint on the kind of human diversity that makes physics healthy made me feel very good about my role outside academia.
- His questions and observations about the nature of time gave me the feeling – rightly or wrongly – that I possessed uncommon insights in an important and current area of theoretical physics (some of which I will detail here; others in the above-mentioned essay).
- His comments about the absence of uncertainty in general relativity and his allusions to particular areas of ongoing research reminded me very strongly of a computer program I once tried to write.

It is this third item that I will discuss presently.

To get a clear picture of what a theory predicts, it is often useful to do an imaginary experiment that sets up the very simplest of test conditions and to apply the theory in that context. When the necessary calculations are repetitive, we may choose to do this with the aid of computers; in so doing, we unfortunately present ourselves with a multitude of new ways to botch the experiment, but we do free ourselves from the labor of repeated calculation. The experiment thus takes place in a virtual world, and we hope it will approximate the results we would find in the real world.

I wanted to try this with what I thought would be a very simple experiment. On a Cartesian grid, I would set up two imaginary, freely falling bodies with known initial conditions for mass, position, and velocity, and then see how they interact gravitationally. Having at the time no grasp at all of general relativity, I instead chose the classical force law for gravity given to us by Newton. But I wanted at least to be sophisticated enough to use the speed of light as a limiting condition on how the bodies interacted. Rather than implementing gravity as an instantaneous “action at a distance” causing each of the bodies to be attracted to the position the other *was in*, I decided that they would be attracted to the position the other *had been in*. In other words, information about each body’s mass, position, and velocity would be carried to the other at the speed of light, putting my experiment at least partially in accordance with the theory of relativity.

After some consideration, I decided that the problem need not be a hard one. There is a well-known geometric device for thinking about the limits of relativistic cause and effect: the light cone. To determine the immediate force on Body A, I only needed to figure out which event in its past light cone was occupied by Body B. Once I had that event, I could take the spatial distance between it and Body A’s current event, plug that distance into Newton’s gravitational force law, and calculate the force. To determine the immediate force on Body B, I would do the same calculation with regard to Body A. This algorithm was to be repeated over and over at intervals of time to create an approximation of the paths the bodies would follow if they were acted on continuously. For each iteration, I would advance the time by one “tick” of my imaginary clock.
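In modern Python, the per-tick lookup might be sketched like this. It is a reconstruction for illustration only; the constants and function names are my own invention, not the original program’s:

```python
import math

G = 6.674e-11    # Newton's gravitational constant
C = 299_792_458  # speed of light, m/s

def retarded_event(history, t_now, pos_a):
    """Find the event in `history` (a list of (t, x, y) samples for Body B)
    lying on the past light cone of Body A's current event (t_now, pos_a).

    The light-cone condition is |pos_a - pos_b(t_ret)| = C * (t_now - t_ret).
    Scanning newest-first, we take the first sampled event that is already
    old enough for its light to have reached A -- an approximation to the
    exact crossing point.
    """
    for t, x, y in reversed(history):
        delay = t_now - t
        dist = math.hypot(pos_a[0] - x, pos_a[1] - y)
        if dist >= C * delay:
            return t, x, y
    return history[0]  # fall back to the oldest sample we have

def newton_force(m_a, m_b, pos_a, pos_b_retarded):
    """Force vector on A, pointing toward B's *retarded* position."""
    dx = pos_b_retarded[0] - pos_a[0]
    dy = pos_b_retarded[1] - pos_a[1]
    r = math.hypot(dx, dy)
    f = G * m_a * m_b / r**2
    return f * dx / r, f * dy / r
```

The same two calls, with the roles of A and B swapped, give the force on Body B for that tick.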

Once I tried to implement this solution, however, several problems became immediately obvious. Starting only with the initial conditions (at time zero), I had no idea where Body B intersected Body A’s past light cone. Since the two would already have been interacting gravitationally, putting them on curved paths, I could only extrapolate linearly backward from Body B’s zero-time position and velocity, making it an explicit condition of the test that both A and B were released from uniform motion at time zero. For each successive calculation of the immediate force on Body A, my program would first have to ask whether Body B was close enough, and whether enough time had yet passed, for Body A to be affected by B’s release at time zero. If not, my program could safely assume that B had continued on a linear path matching its position and velocity at time zero, and make the force calculation accordingly.
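This pre-release bookkeeping amounts to two small checks, sketched here with hypothetical names of my own choosing:

```python
C = 299_792_458  # speed of light, m/s

def b_still_looks_inertial(t_now, dist_ab):
    """True while news of B's release at t = 0 has not yet reached A,
    so A may safely treat B as having moved uniformly all along."""
    return C * t_now < dist_ab

def extrapolate_linear(pos0, vel0, t):
    """B's assumed position at a (possibly negative) time t, extrapolated
    from its position and velocity at time zero."""
    return (pos0[0] + vel0[0] * t, pos0[1] + vel0[1] * t)
```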

The second problem concerned how to track all of the changes in velocity that the bodies experienced after being released at time zero. It wasn’t enough just to track their current positions and velocities; at each iteration of my algorithm – for each tick of the clock – I had to be able to recall the position and velocity of Body B at the time it had crossed the past light cone of the event which Body A currently occupied. I wondered whether each point in my imaginary space could be made to receive information about the motions of these bodies and propagate it through the grid at the speed of light. I imagined that for each tick of the clock, I could take the information at one point and pass it to its four nearest neighbors. This, however, presented its own problem: light propagates radially, spreading out in a circle, while nodes in a square grid can only interact in an inherently square fashion. I decided instead to store these positions and velocities in a database table during each iteration, starting from time zero, so that they could be recalled when needed.
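The table-based approach can be sketched as a simple per-body history class, an illustrative stand-in for the database table rather than the original code:

```python
class StateHistory:
    """A per-body table of (tick, x, y, vx, vy) rows, appended once per tick.

    Only ticks within the light-travel time to the other body can ever be
    looked up again, so a bounded buffer would also do; an unbounded list
    mirrors the database table described above.
    """
    def __init__(self):
        self.rows = []

    def record(self, tick, x, y, vx, vy):
        self.rows.append((tick, x, y, vx, vy))

    def at_tick(self, tick):
        # Rows are appended in tick order, so the tick doubles as an index.
        return self.rows[tick]
```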

The third problem was that I would be storing my data at integer values of time, yet I would almost certainly be looking for position and velocity measurements which had occurred at non-integer values of time. In other words, I could expect Body B to have intersected Body A’s light cone at times in-between ticks of the clock rather than always on them. I would have to estimate using the nearest neighboring measurements.
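Estimating between ticks is just linear interpolation over the two nearest stored rows. A minimal sketch, using the row layout from the history table above:

```python
def interpolate(row_before, row_after, t):
    """Estimate a body's state at a non-integer time t lying between two
    stored rows, each of the form (tick, x, y, vx, vy).

    Linear interpolation is the simplest estimate 'using the nearest
    neighboring measurements'.
    """
    t0, t1 = row_before[0], row_after[0]
    frac = (t - t0) / (t1 - t0)
    return tuple(a + frac * (b - a)
                 for a, b in zip(row_before[1:], row_after[1:]))
```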

I assumed that real physicists must have a better approach to these types of problems, and I wondered what it was. Shortly afterward I became distracted by other activities. What I failed to appreciate at the time was how the problems themselves were indicative of the true nature of our world.

Anyone with a passing familiarity with quantum physics knows that there is a degree of uncertainty inherent in the measurement of position and momentum of any mass, as taught by Werner Heisenberg. This uncertainty is most evident on small scales, and that small scale is the usual context in which quantum physics is considered. The nature of this uncertainty is that the more accurately we measure a particle’s position, the less we can know about its momentum, and vice versa. Mathematically, the standard deviation of the position, multiplied by the standard deviation of the momentum, is always greater than or equal to a universal constant.
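In standard notation, writing $\sigma_x$ for the position spread, $\sigma_p$ for the momentum spread, and $\hbar$ for the reduced Planck constant, the relation reads:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```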

This uncertainty is the subject of one of the greatest debates in the history of physics, with Einstein’s “God does not play dice” comment and Erwin Schrödinger’s *reductio ad absurdum* “cat in a box” thought experiment on one side; and a logical positivist school of thought on the other side which says that if a particle’s position can only be *measured* to a particular limit of precision, it cannot truly be said to *have* an arbitrarily precise position.

What has occurred to me recently is that there is likewise an amount of uncertainty – even on large scales – which arises from special relativity.

Rather than as virtual objects in a computer program, let us re-envision bodies A and B as enormous bodies of known mass, widely separated in space, perhaps at a distance of ten light-seconds. At time zero, we measure their positions and velocities (how you would do this with such widely separated objects is beyond me, but for the sake of argument let us assume that we can). Knowing their masses, we can calculate their momenta from their measured velocities. With what degree of certainty can we predict what their positions and momenta will be one second later? Those quantities will be affected by their mutual gravitational attraction during that second. With what degree of precision can we calculate that attraction? We have their positions now, but how could these bodies react to each other’s present locations or actions if they are ten light-seconds apart? Mustn’t each instead respond to the force arising from its counterpart’s position ten seconds ago? Given that they have been interacting since long before we arrived on the scene to take measurements, we can only make an educated guess as to their past positions, and thus there is an amount of uncertainty regarding the path each will follow and what its position and momentum will be one second from now. As we take repeated measurements, this uncertainty decreases. For every second that passes, we have that much less history of interaction to speculate about and can instead refer to our previous measurements.

One less-known consequence of Heisenberg’s uncertainty principle is that measurements of energy decrease in uncertainty over time. If we take liberties with the above uncertainty relation, we may grope toward a sense of why this may be so: if we multiply position by momentum, we get the same units of measure as energy multiplied by time. As Lawrence Krauss explains, “[T]he uncertainty principle tells us that the longer we measure something, the more accurately we can determine its total energy. Since all measurements take merely a finite time, however, there is always a residual uncertainty in the value of the energy that can be measured in any system.” (*Hiding in the Mirror*, p. 108)
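The dimensional bookkeeping can be made explicit, and the corresponding energy-time relation is conventionally written:

```latex
[x]\,[p] \;=\; \mathrm{m}\cdot\frac{\mathrm{kg\,m}}{\mathrm{s}} \;=\; \mathrm{J\,s} \;=\; [E]\,[t],
\qquad
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
```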

This sounds suspiciously like our computer modeling problem: at time zero, we have no idea what gravitational energy is hidden in the system. It is revealed only over time, as the presumable gravitons (or whatever) reach their targets. If we had instead chosen to model an electrodynamic system consisting of two charged particles, the problem would have been more or less the same as in our gravitational system, with some of the information in the system being tied up in photons rather than gravitons. In either case, the force-carrying particles contain “hidden variables.” Is it possible that relativity and quantum theory are thus different expressions of the same physical limits?

“Hidden variables” are exactly what are missing from quantum theory, giving us a world in which some processes are supposed to happen completely at random, in a sense without cause. Quantum theory demands indeterminacy, while relativity is seen by some (including Smolin) as giving us a four-dimensional, pre-determined “eternity.” Relativity is supposed to codify causality by setting a speed limit on any and all processes, while quantum theory appears to defy causality.

Smolin sees a complete absence of indeterminacy in relativity, denouncing the Newtonian “Expulsion of Novelty and Surprise” in an entire chapter by that name and presenting relativity as the final nail in the coffin of free will: “Given the initial conditions, Einstein’s equations determine the whole future geometry of a particular spacetime and everything it contains, including matter and radiation,” leaving “no role for or sign of our awareness of the present.” (*Time Reborn*, p. 71) Ironically, in the text preceding this, he seems to demonstrate precisely the opposite, showing in a very salient analysis that the “initial conditions” of a “subsystem” are necessarily at least partly outside the subsystem being considered: “We live in a world in which the flap of a butterfly’s wing can influence the weather oceans away and months later.” (p. 49) What else might he be looking for? Perhaps he is saying that the Big Bang provides a theoretically complete set of initial conditions from which one may extrapolate the subsequent history of the universe. I found the second half of *Time Reborn* very difficult to follow.

*Time Reborn* may be the most agitating book I have ever read, because I felt several times that I truly could not tell whether Smolin was setting the stage for an answer he is not yet sharing; whether I misunderstood the problem as he stated it; or whether – like Lorentz and Poincaré – he had complete mastery of the problem yet was just shy of seeing, as Einstein did, the revolutionary solution. Though much of what he *does* propose as a solution to the dilemma of time goes over my head, I do see that he is inclined to consider “a preferred state of rest” and “a preferred observer, whose clock measures [a] preferred time” (pp. 165-166), and for this reason I have to conclude that one of us is missing an important point. He has obviously given this matter a tremendous amount of thought and can easily run theoretical and mathematical circles around me, but his “parabolic” treatment of the relatively simple matter of elliptical non-escaping trajectories gives me great pause. He correctly sets up the problem of the apparent difference between elliptical orbits and parabolic trajectories of falling bodies, and shows the relationship of these two shapes as conic sections. But then, rather than showing how the elliptical trajectory merely *approaches* a parabolic shape at lower velocities (as it also approaches a straight-line drop), he leaves the reader with the conclusion that it is actually “parabolas [that] trace the paths of falling bodies on Earth” (p. 21).

The references Smolin makes are tantalizingly suggestive of my ideas above. He mentions the study of cellular automata, an exploration of cause and effect not unlike what I described attempting in my computer simulation above. “With only a little modification,” he writes, the study of cellular automata is “the basis for quantum mechanics.” (*Time Reborn*, p. 43) In the end notes, he again comes close to – but does not touch – the relativistic two-body problem: “Consider a system of stars moving under their mutual gravitational influence. The interaction of two stars can be described exactly; Newton solved that problem. But there is no exact solution to the problem of describing the gravitational interaction of three stars.”

In the above discussion of my computer model, we saw that the initial conditions in a finite “span” of spacetime are such that everywhere in it, there are light waves carrying information about times preceding the “zero” time. The future state of the system is *indeterminate* without that hidden information. Inasmuch as every span of spacetime contains information about other spans, it may be seen as analogous to a hologram. Suppose we have created a hologram showing an image of a bird. One of the remarkable properties of a hologram – related to the fact that it allows three-dimensional representations to be made on a two-dimensional surface – is that there is no one-to-one correspondence between the parts of our bird and the points on the surface of our hologram. Each part of the hologram contains information about many parts of the bird.

Holography has special significance in cosmology due to the “holographic principle” developed by Gerard ’t Hooft. I will not pretend to understand it, but as I will show below, it does seem to have more connection to my computer modeling problem than just what its name may suggest. Wikipedia summarizes the principle thus:

> The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information “inscribed” on the surface of its boundary. (http://en.wikipedia.org/wiki/Holographic_principle)

Smolin writes of a “similar idea” proposed by Louis Crane, who suggested that

> [Q]uantum mechanics is not a static description of a system but a record of information that one subsystem of the universe can have about another by virtue of their interaction. He then suggested that there is a quantum-mechanical description connected with every way of dividing the universe into two parts. The quantum states live not in one part or the other but on the boundary between them.

Crane’s radical suggestion has since grown into a class of approaches to quantum theory that are called

> relational quantum theories, because they are based on the idea that quantum mechanics is a description of relations between subsystems of the universe. This idea was developed by Carlo Rovelli, who showed it to be perfectly consistent with how we usually do quantum theory. In the context of quantum gravity . . . [Fotini] Markopoulou emphasized that describing the exchange of information between different subsystems is the same as describing the causal structure that limits which systems can influence each other. She thus found that a universe can be described as a quantum computer, with a dynamically generated logic. (*The Trouble With Physics*, pp. 317-318)

Did you catch that? Relativity is the causal structure which places speed limits on the exchange of information from system to system, and quantum theory is about describing those systems’ interactions; they seem to be different approaches to the same thing. The universe behaves like a relativistic quantum computer.

I think I am starting to see why Smolin would call Crane’s concept similar to ‘t Hooft’s. One is talking about the surface area surrounding a volume and the other is talking about how information crosses the (surface) boundary between one volume and another. Fascinating to think about. Watch for a post soon on the closely related topic of entropy and its relationship to gravity.