P.C.W. Davies
Abstract
A major focus of current research in theoretical physics is the formulation of a final unified
theory in which all the known laws would be amalgamated into a single compact mathematical
scheme. A popular contender for this complete unification is string/M theory; another is loop
quantum gravity. These developments are forcing physicists to confront the nature of physical
law: what are such laws, where do they come from and why do they have the form that they do?
Theorists are sharply split over whether a final theory would be unique, and so describe only one
possible world, or whether the observed universe is but one “solution” of a multiplicity of
possible worlds, and if so, whether our universe is an infinitesimal component in a vast and
variegated multiverse. Central to the issues involved is the fact that the laws of physics in our
universe seem uncannily suited to the emergence of life. Indeed, some commentators believe the
bio-friendliness of the universe has the air of a fine-tuned big fix, and cries out for explanation. A
fashionable idea is that the “unreasonable” fitness of the universe for life is the result of an
observer selection effect. Only in universes which by accident possess appropriate laws and
conditions will life arise and observers exist to ponder the significance of the cosmic fine-tuning.
Universes that are less propitiously endowed will be sterile, and so go unobserved. In this chapter
I critically assess the strengths and weaknesses of both the unique universe and multiverse
proposals, and argue that both fall short of providing an ultimate explanation for physical
existence.
1. Background
The laws of physics stand at the very heart of our scientific picture of the world. Indeed, the
entire scientific enterprise is founded on the belief that the universe is ordered in an intelligible
way, and this order is given its most refined expression in the laws of physics, which are widely
assumed to underpin all natural law. Physicists may disagree about the likely final form of the
laws of physics, or which of the known laws are fundamental and which are secondary, but the
existence of some set of laws is taken for granted. In this essay, I would like to put the laws of
physics under the spotlight, and ask some challenging questions, such as: Why are there laws of
physics? Where do they come from? Why do they have the form that they do? Could they have
been otherwise, and if so, is there anything special about the form we observe? Physicists do not
usually ask these sorts of questions. The job of the physicist is to accept the laws of physics as
“given” and get on with the task of working out their consequences. Questions about “why those
laws” traditionally belong to metaphysics. Yet they have been thrown into sharp relief by two
recent developments in physical theory. The first is the growing interest in the unification of
physics, in programs such as string/M theory (see, for example, Greene, 2000) and loop quantum
gravity (Smolin, 2002) which seek to amalgamate all physical laws within a single scheme. The
second is the realization that what we have all along been calling “the universe” might in reality
be just an infinitesimal component in a vast mosaic of universes with different laws, popularly known as the multiverse. Before turning to these developments, it is worth noting three properties of the laws of physics that will be central to the discussion:
*They are mathematical in form, a quality famously expressed in Galileo’s comment that
“the great book of nature is written in the language of mathematics,” and propelled to
prominence by Eugene Wigner in his 1960 essay “The unreasonable effectiveness of mathematics in the natural sciences” (Wigner, 1960).
*They are bio-friendly, that is, they permit the emergence of life, and thereby observers
(e.g. human beings) in the universe (see, for example, Susskind, 2005; Davies, 2006). It is
easy to imagine universes with laws that are inconsistent with life, at least as we know it.
*They are comprehensible, at least in part. Again, it is easy to imagine universes with
laws so complicated or subtle that they lie beyond our grasp, or universes that have no discernible rational order at all.
In what follows, I shall ask whether these properties can be explained, and if so, what sort of explanation would be satisfactory.
When is a law not a law? Answer: when it is a frozen accident. Many features of the physical
universe that were once considered to be the product of a fundamental law eventually turn out to
be the product of historical happenstance. For example, Bode’s so-called law of planetary orbits
(Bode, 1772), which fitted the distances of the planets from the sun to a simple numerical
formula, turns out to be just a curious coincidence. The sizes and shapes of planetary orbits do
not conform to a fundamental law of nature, but are in large measure an accident of fate,
determined by complicated features of the proto-planetary nebula during the formation of the
solar system. Other planetary systems are known with very different statistics. Notwithstanding
the fact that Bode’s law isn’t a law at all, planetary orbits are not arbitrary, but conform to a well-defined dynamical pattern determined by the laws of gravitation.
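For illustration, the Titius-Bode pattern can be written a_n = 0.4 + 0.3 × 2^n astronomical units. A short sketch comparing the rule with rounded modern orbital data (the rule fitted the planets known in Bode’s day, including Uranus, but fails badly for Neptune):

```python
# Titius-Bode rule: a_n = 0.4 + 0.3 * 2**n astronomical units (AU),
# with Mercury conventionally assigned the limiting value 0.4 AU.
def bode(n):
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

# Observed semi-major axes in AU (rounded); Ceres fills the n = 3 slot.
planets = [("Mercury", None, 0.39), ("Venus", 0, 0.72), ("Earth", 1, 1.00),
           ("Mars", 2, 1.52), ("Ceres", 3, 2.77), ("Jupiter", 4, 5.20),
           ("Saturn", 5, 9.54), ("Uranus", 6, 19.19), ("Neptune", 7, 30.07)]

for name, n, actual in planets:
    print(f"{name:8s} rule: {bode(n):6.2f} AU   observed: {actual:6.2f} AU")
```

The rule’s failure for Neptune (38.8 AU predicted versus about 30 observed) underlines the point: the pattern is a happenstance of this particular planetary system, not a law.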
In reflecting on the significance of physical laws, we need to know which fall into the category of
frozen accident, like Bode’s “law,” and which are in some sense “true” laws. This may not be
straightforward. The electromagnetic and weak forces were originally considered to be separate
and fundamental, described by Maxwell’s theory and Fermi’s theory respectively (the latter was
recognized as an unsatisfactory first step, but it was widely supposed that a better theory could be
formulated). In the 1960s these two forces were shown in fact to be part of an amalgamated
electroweak force, whereupon Fermi’s account of the weak force was revealed as merely an
effective theory, approximately valid only at low energy. As the energy, or temperature, is raised,
so the two forces converge in their properties. Similarly, the so-called grand unified theories, or
GUTs, that combine the strong, weak and electromagnetic interactions, have the feature that the
forces merge in identity as the energy is raised (see, for example, Weinberg, 1992).
The general tendency for the forces to converge at high energies, and for the (relatively) low-
energy effective laws familiar in laboratory experiments to transform into a deeper, unified, set of
laws, has important consequences for cosmology. As the universe cooled from an ultra-hot initial
state, so the forces separated into their distinct identities, with each force being describable by a
low-energy effective law that conceals the “true” underlying unified laws. The transition to low-
energy effective laws comes about as a result of symmetry breaking, for example, by the Higgs
mechanism (for a popular account of the Higgs mechanism, see for example Krauss, 1994). This
introduces a random element into the low energy physics that can create a cosmic domain
structure. In the case of the symmetry breaking that splits the weak and electromagnetic forces
into distinct entities, the random element involves only a locally unobservable phase factor, and
does not affect the form of the electromagnetic and weak force laws. But generically, spontaneous
symmetry breaking will serve to determine the form of the low-energy effective laws, for
example, in the case of grand unified theories that split at low energies into three or more forces
with different gauge symmetries (see, for example, Randall, 2005, Ch. 11). In some models,
random spontaneous symmetry breaking may also determine the values of particle masses and
coupling constants.
The recognition that the laws of physics operating in the relatively low-energy world of everyday
physics may be merely effective laws and not the “true,” fundamental, underlying laws, has led to
a radical reappraisal of the nature of physical law. To use Martin Rees’s terminology (Rees,
2001), what we thought were absolute universal laws might turn out to be more akin to local by-
laws, valid in our cosmic region, but not applicable in other cosmic regions. (A cosmic region
here might encompass a volume of space much larger than the observable universe.)
The uncanny bio-friendliness of the laws of physics has received considerable attention in recent years (Barrow and Tipler, 1986; Rees, 2001; Susskind, 2005; Davies, 2006;
Carr, 2007). Expressed simply, if the observed laws of physics had been different in form,
perhaps only slightly, it is likely that life would be impossible. For example, if gravity were a bit
stronger, or the mass of the electron a bit larger, then certain key processes necessary for the
emergence of life may have been compromised. We can imagine the laws of physics being
different in two ways. The form of the laws might have been otherwise (e.g. the equations of the
electromagnetic field could have contained a nonlinear self-coupling term or a mass term), and
the various “constants of nature,” such as the fine structure constant, might have assumed
different values. Physics as we currently understand it contains between twenty and thirty
undetermined parameters, whose values must be fixed by experiment. These include the
parameters of the Standard Model of particle physics, for example, quark and lepton masses, the
coupling constants describing the strengths of the fundamental forces, and various mixing angles.
If cosmology is also considered, then there are additional undetermined parameters, such as the
value of the density of dark energy and the amplitude of primordial density fluctuations that
seeded the universe with large-scale structure (Tegmark et al., 2006). The presence of life in the
universe seems to depend rather sensitively on the precise values of some of these parameters,
and less sensitively on others. That is, had the values of some parameters differed only slightly
from their measured values, then the universe may well have been sterile.
One way to envisage this is to imagine playing God and twiddling the knobs of a Designer
Machine: turn this knob and make the electron a bit heavier, turn that and make the weak force a
bit weaker, all else being left alone. Then it seems that some knobs at least must be rather finely
tuned so that the associated parameters lie close to their observed values, or the tinkering would
prove lethal.
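The knob-twiddling image is easy to caricature in code. The “bio-friendly window” below is invented purely for the sketch (it has no physical standing), but it shows how a small tolerance on each knob translates into a tiny bio-friendly fraction of the space of settings:

```python
import random

# Toy "Designer Machine": twiddle two knobs at random around their observed
# values (here normalized to 1) and count how often a made-up bio-friendly
# criterion is met. The windows below are illustrative, not real physics.
random.seed(0)
trials, friendly = 100_000, 0
for _ in range(trials):
    electron_mass = random.uniform(0.5, 2.0)  # in units of the observed value
    weak_strength = random.uniform(0.5, 2.0)
    if 0.95 < electron_mass < 1.05 and 0.9 < weak_strength < 1.1:
        friendly += 1

print(f"bio-friendly fraction of knob settings: {friendly / trials:.4f}")
```

With these arbitrary tolerances, only about one per cent of the settings qualify; narrowing either window shrinks the fraction multiplicatively, which is the sense in which independent fine-tunings compound.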
Many examples of fine-tuning have been discussed in the literature (e.g. Barrow and Tipler, 1986), so I will briefly mention only one here by way of illustration. Life depends on an
abundance of the element carbon, which was not present at the birth of the universe. Rather, it
was manufactured by nuclear fusion reactions inside the massive stars that first formed about half
a billion years later. Stars like the sun burn by fusing hydrogen into helium, but there
is no nuclear pathway leading from helium to carbon via two-body interactions (the relevant
isotopes of beryllium and lithium are highly unstable). What happens instead is that three helium
nuclei fuse to form a nucleus of carbon. Because a triple encounter is involved, the statistics of
the reaction look very unfavorable. However, by good fortune there is an excited state of the
carbon nucleus which produces a strong resonance in the capture cross-section at just the right
energy, opening the way for the production of abundant carbon. History records that Fred Hoyle,
guessing that such a resonance must hold the key, pestered Willy Fowler to confirm its existence by experiment, which Fowler duly did.
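The energetics behind Hoyle’s reasoning can be recovered from standard atomic masses. Three helium-4 nuclei sit about 7.27 MeV above the carbon-12 ground state, and the excited state Hoyle predicted (now called the Hoyle state) lies at about 7.65 MeV, just above that threshold:

```python
# Mass-energy bookkeeping for the triple-alpha process, using rounded
# standard atomic masses in unified mass units (u).
m_he4 = 4.002602     # helium-4
m_c12 = 12.000000    # carbon-12 (defines the atomic mass scale)
u_to_mev = 931.494   # energy equivalent of 1 u in MeV

threshold = (3 * m_he4 - m_c12) * u_to_mev  # ~7.27 MeV above C-12 ground state
hoyle_state = 7.654                          # measured excitation energy, MeV

print(f"3-alpha threshold: {threshold:.2f} MeV")
print(f"Hoyle state lies {hoyle_state - threshold:.2f} MeV above it")
```

The narrow margin by which the resonance clears the threshold is what makes the capture cross-section so favorable at stellar temperatures.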
The position of the carbon resonance depends on the interplay of the strong and electromagnetic
forces. If the ratio of the forces were different, either way, even by a few per cent, then the
universe would be starved of carbon, and life may never have arisen (Barrow and Tipler, 1986).
Hoyle was so struck by this apparent coincidence that he later described it as if “a super-intellect
has been monkeying with the laws of physics” (Hoyle, 1982). Today we know that the force
binding together the nucleons in carbon is not fundamental, but a by-product of the strong force acting between quarks, described by quantum chromodynamics (QCD). Determining how the position of the resonance depends on the underlying parameters would require an elaborate lattice QCD calculation, including the electromagnetic contributions
to the masses. The relevant “tuning parameter” would no longer be the phenomenological
coupling constant between nucleons versus the fine-structure constant, but the parameters of the
Standard Model, including the Higgs mass. As this calculation has not been done, it is not
possible to know how sensitively the carbon production will depend on these parameters.
The foregoing point raises the difficult question of which parameters are truly independent and
therefore separately “tunable,” and which might be linked by a deeper level of theory. For
example, in the days before Maxwell, the electric permittivity of free space, the magnetic
permeability of free space and the speed of light were regarded as three undetermined parameters of nature. Maxwell’s theory of electromagnetism eliminates one of them by expressing the speed of light in terms of the other two: c = 1/√(ε0μ0). In the same
vein, one wonders how many of the twenty-odd parameters in the Standard Model of particle
physics are independent. It is expected that at least some of them would be linked at a deeper
level through a unification scheme that goes beyond the Standard Model, such as one of the
Grand Unified Theories (see, for example, Greene 2000). Of greater significance for the present
essay is whether all of the parameters will ultimately turn out to be inter-dependent. Some
string/M theory advocates believe that a full understanding of the theory would reveal a unique
solution in which there are no free parameters: everything would be fixed by the theory. I shall refer to a theory of this sort as a no-free-parameters (NFP) theory.
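The pre-Maxwell example mentioned above is the cleanest case of parameters being linked by deeper theory: once Maxwell’s equations are accepted, the speed of light ceases to be an independent constant and follows from the other two via c = 1/√(ε0μ0). A quick numerical check with CODATA values:

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA)
mu0  = 1.25663706212e-6  # vacuum permeability, H/m (CODATA)

c = 1.0 / math.sqrt(eps0 * mu0)  # Maxwell: c is fixed by eps0 and mu0
print(f"c = {c:.6e} m/s")        # ~2.998e8 m/s, the measured speed of light
```

One free parameter is thereby eliminated, exactly the kind of reduction a deeper unification of the Standard Model parameters would achieve on a grander scale.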
Four explanations for cosmic bio-friendliness have been discussed in the literature.
A. It is a fluke. The laws of physics just happen to permit the existence of life and
consciousness, and nothing of deep significance can be read into it, because life and
consciousness themselves have no deep significance. They are just two sorts of physical states among the many that the universe happens to generate.
B. Multiverse. The observed universe is but one among very many, each possessing
different laws, perhaps distributed randomly among the cosmic ensemble, or multiverse.
The observed laws are in fact local by-laws which, purely by accident, happen to favor
life. In other words, we are winners in a cosmic lottery. Obviously we would not find
ourselves located in a cosmic region incompatible with life, so the bio-friendly character
of the observed laws is simply the result of a straightforward selection effect, sometimes known as the weak anthropic principle.
C. Providence. The universe is fit for life because it is the product of purposive agency.
The agent might be anything from a traditional god (or gods) to a universe-creating super-
civilization in another universe, or another region of our universe. A variant on this theme
is that the universe is actually a deliberately engineered simulation (e.g. a virtual reality
show in a supercomputer).
D. Self-explanation. The universe accounts for its own bio-friendliness, via such notions as self-creating universes, loops in time, retro-causation, and variations of the so-called strong anthropic
principle, in which the emergence of life and mind is built into the nature of physical law
in a manner that makes observers (or at least the potential for observation) inevitable.
I shall now briefly examine each proposal A – D in turn.
Einstein once remarked that the thing which most interested him was whether “God had any
choice in the creation of the world.” In other words, could the universe have been
fundamentally different from what it is, for example, by having a different law of gravitation, or
massive photons, or neutrons lighter than protons? If the universe could have been different, then
it would raise the question of why it is as it is, i.e. why the laws of physics are what they are. And
in particular, one would want to know why those laws are so weirdly bio-friendly. Some
advocates of the NFP theory think the answer to Einstein’s question is no. There is only one
possible universe, and this is it (see, for example, Gross, 2003). If so, the fact that the one and
only universe permits life and consciousness would simply be a bonus, a fluke of no significance.
How seriously can we take the claim that there exists a unique final theory, even if such a theory
has not yet been exhibited? Is this just promissory triumphalism? In its strongest form, the claim
is clearly false. We can easily describe other universes that are logically possible and internally
self-consistent, but are not descriptions of the observed universe. Indeed, it is the job of the
theoretical physicist to construct simplified models of the real world chosen for their
mathematical tractability. These models capture some aspect of reality, but they are only
impoverished descriptions of the observed universe. Nevertheless they are possible worlds. For
example, it is common practice for theoretical physicists to use models that suppress one or more
space dimensions. I myself worked a lot on quantum field theory in one space and one time
dimension (Birrell and Davies, 1981). One example I considered was an exactly soluble two-
dimensional non-linear quantum field theory called the Thirring model (Birrell and Davies,
1978). It describes a possible (rather dull) world, which is clearly not this world. Another popular
impoverished model is general relativity in three spacetime dimensions, i.e. a world of two space
dimensions in which gravitation is the only force and classical mechanics applies.
It’s not necessary to consider radically different universes to make the foregoing point. Let’s start
with the universe as we know it, and imagine changing something by fiat: for example, make the
electron heavier and leave everything else alone. Would this arrangement not describe a logically
possible universe, yet one that is different from our universe? To be sure, there is much more to a
satisfying physical theory than a dry list of parameter values. There should be a unifying
mathematical framework from which these numbers emerge as only a part of the story. But a
finite set of parameters may be fitted to an unlimited number of mathematical forms. Most of the
mathematical forms will be ugly and complicated, but that is an aesthetic judgment. Clearly no
unique theory of everything exists if one is prepared to entertain other possible universes and other possible laws.
Many physicists would be prepared to settle for a weaker claim. Granted, there may be many
self-consistent unified theories describing worlds different from ours, but perhaps there is only
one self-consistent theory of this universe. Perhaps if we knew enough about unifying theories
we would find that only one knob setting of the Designer Machine (i.e. only one theory) fits all
the known facts about the world – not just the values of the constants of nature, but such things as
the existence of galaxies and stars, life and observers. It could be that there are many possible
NFP theories describing many possible completely-defined universes, but only one of those
theories fits all the facts about the actually-observed universe. An appealing embellishment of
this conjecture would be if the set of laws describing the observed universe is the simplest
possible consistent with the existence of observers. Needless to say, there is no evidence in our present state of knowledge to support this conjecture.
It is important to realize, however, that even if something like the foregoing claim were true, it
would fall short of providing a complete and closed explanation of physical existence. One could
still ask why, from among the multiplicity of logically possible universes, both those described by
NFP theories and those described by non-NFP theories, this one has been “picked out” to exist.
Or, to use Stephen Hawking’s more colorful description, “What is it that breathes fire into the
equations and makes a universe for them to describe?” (Hawking, 1988). I shall return to this question later.
Belief that an NFP theory will flow from string/M theory remains an act of faith, since there is
neither a solution to the theory, nor even much of a hint about how to find one. Meanwhile,
perturbative solutions have been examined in some sectors of the theory, and they point strongly
against a unique solution, and more toward a vast multiplicity of different solutions. That is, the
theory predicts a stupendous number and variety of possible low-energy effective laws,
demolishing any hope that the observed world might be the unique solution of the theory, and
therefore the only possible string/M theory world. While the proliferation of apparent solutions to
string/M theory may be regarded by NFP believers as unwelcome, others have seized upon it as a positive advantage, for it opens the way to a multiverse. One route to a multiverse is for the low-energy effective laws to freeze out differently in different cosmic regions after the big bang via a sequence of symmetry breaks, as this leads naturally to a cosmic domain structure. But the richest form of multiverse follows from string/M theory if one accepts the existence of a vast “landscape” of possible low-energy worlds. According to Susskind (2005), there are at least 10^500 possible worlds on the string theory landscape. However,
the mere possibility of other universes with other laws does not mean that they actually exist. To convert these possibilities into actualities, a universe-generating mechanism is needed.
Cosmologists agree that the universe began with a big bang. It was either a natural event or it was
not; if it was not, then it would be beyond the scope of science to explain it. If it was a natural
event, then it makes little sense to insist it was unique, for what law-like physical mechanism is
restricted to operate only once? The very early universe was dominated by quantum mechanical processes, and quantum mechanics attaches a finite probability to the nucleation of a universe in a big bang. A finite probability implies that big bangs will have happened many (even
an infinite number of) times, that is, quantum mechanics automatically predicts a multiverse of
big-bang-initiated universes. A specific model of how this might occur is given by the eternal
inflation model, according to which our universe originated by nucleating from an eternally inflating superstructure of spacetime (see, for example, Linde, 1990). The eternal inflation theory predicts that other bubbles exist in other regions of the
superstructure, and at earlier and later times, forming an unending assemblage of “pocket
universes,” each starting out with a big bang and following its own evolutionary pathway.
Quantum uncertainty demands that the initial states of the bubbles are not identical, but are
distributed (with some as-yet unknown probability measure) across the space of all possibilities,
e.g. across the string/M theory landscape. The low-energy physics and the distribution of matter
and energy in the pocket universes will therefore differ from one to another. Mostly the bubbles,
or pocket universes, are conveyed apart by the inflating superstructure faster than they can
expand, and so they do not intersect, although there is a tiny probability that one bubble can
nucleate inside another. In this manner, quantum cosmology provides a natural universe-
generating mechanism to populate the string theory landscape, or to instantiate whatever cosmic possibilities the underlying theory permits.
The success or otherwise of the multiverse explanation of the Goldilocks effect depends on how
densely the ensemble populates the relevant parameter-space (i.e. the space spanned by “bio-
sensitive” parameters). If there is a rich selection of possible low-energy effective laws (e.g. with
closely-spaced possible values of particle masses, force strengths, etc.) then there will be many
universes with laws and parameter values that permit life. The string theory landscape model
seems well suited to this scenario. A possible statistical test of the multiverse explanation follows
if one makes the additional assumption that within the set of all life-permitting universes, ours is
a typical member (Weinstein, 2006). We would then expect the measured values of any
biologically relevant parameters not to lie in an exceptional subset of the parameter range.
The above point can be illustrated with the help of a simple analogy. The Earth’s obliquity (the
tilt of its spin axis relative to its orbital plane) is about 23°, a configuration that produces
interesting but not vicious seasonal variations. The seasonal cycle is an important driver of
evolution, but a much bigger obliquity, closer to 90°, would disrupt complex life. So something
between, say, 15° and 30° is probably optimal. The fact that Earth’s obliquity has a typical value
lying in the desirable range is no surprise, because otherwise complex intelligent life forms would
not have evolved here. So there is no justification for seeking any deeper significance in the actual value of the obliquity. Now Earth’s obliquity is within a few per cent of the number π/8 radians. Had it been indistinguishable from π/8 to, say, 6 significant figures, we would be justified in
concluding that it was not a typical value in the range needed to permit complex life to evolve,
but in an exceptional subset of that range, and we would be justified in seeking a deeper physical
theory that might yield π/8 exactly for reasons unconnected with a biological selection effect.
The fact that the multiverse hypothesis is vulnerable to falsification in this statistical manner
qualifies the theory for the description “scientific,” even though we may never, even in principle, be able to observe the other universes directly.
6. Providence
A straightforward explanation for why the universe is fit for life is that it is the product of
deliberate engineering, i.e. that an agent (or agents) picked judicious “knob settings on the
Designer Machine” so that life would emerge and sentient beings evolve. Note that this
explanation is very different from the claims of the so-called Intelligent Design movement, whose proponents suppose that an agent intervenes directly in the workings of nature at various times throughout history, in order to “fix up” biological evolution. In the case of cosmological fine-
tuning, all phenomena can be consistent with a naturalistic explanation, i.e. the universe is still
subject to physical laws at all times and places, but the laws themselves are regarded as the
product of some sort of design. There are many variations on this theme. The simplest is to posit
the existence of a transcendent designer who creates a universe suited for life, as a free act, after
the fashion of the monotheistic creation myths (Holder, 2004). The drawback with this
explanation is that it is totally ad hoc, unless one has independent reasons to believe in the
existence of the designer/creator. It also raises the issue of who created/designed the designer.
Theologians have argued that God is a necessary being, i.e. a being whose existence and qualities
do not depend on anything else, and is therefore self-explaining (see, for example, Ward, 2005).
Few scientists, however, find the arguments for a necessary being persuasive. An added problem
is that unless one can also demonstrate that the necessary being is necessarily unique, the way lies
open for an ensemble of necessary beings creating an ensemble of universes. Not only is the latter
very far from traditional monotheism, it renders the creator beings redundant, for one might as
well posit an ensemble of unexplained universes ab initio, without the complication of attaching an ensemble of unexplained creators to them.
Another version of providential design is closer to Plato’s demiurge than to the monotheistic
deity. It is based on the notion of baby universes that are a feature of quantum cosmology,
according to which universes can form, or nucleate, from other universes (see, for example,
Hawking, 1994; Smolin, 2002). It is then but a small step to the speculation that baby universes might be made to order, artificially, by a sufficiently advanced civilization or intelligent agency residing in a “mother” universe. Such a civilization or agency would have the
option of designing the baby universe to be fit for life, by fixing the laws of physics and any free
parameters judiciously. Speculations about artificial baby universes have been made by Farhi and
Guth (1987), Linde (1992) and Harrison (1995), among others. One may envisage that
intelligence first evolves naturally, and then develops over an immense duration to the point
where a super-intelligence emerges with cosmic-scale technology, and manufactures our universe
with its life-encouraging potential. A variant on this theme, published by cosmologists Gott and
Li (1998), involves a causal loop: a baby universe loops back in time to become its own “mother.”
A more extreme speculation is that our universe is not only artificial, but fake, i.e. it is a gigantic simulation, a virtual reality show of the sort depicted in The Matrix series of movies. The so-called simulation argument is popular among certain
philosophers (Bostrom, 2003), and has also been defended by some cosmologists (Tipler, 1994;
Barrow, 2003; Rees, 2003). It conforms naturally to the multiverse scenario: the step from a
multiplicity of real universes to a multiverse that includes both real and simulated representatives
is but a small – indeed inevitable – one. To be sure, human beings remain a long way from the
ability to simulate even rudimentary consciousness, let alone the conscious experience of a
sentient being inhabiting a coherent and complex world. But we may imagine that such an ability might eventually be attained by advanced technological communities in at least some subset of universes within a multiverse. Because fake universes are cheaper than real ones, a
single real universe could spawn a vast number of simulations inhabited by a vast number of
sentient beings. According to how one does the statistics, it is easy to imagine that the fake
universes and their inhabitants will greatly outnumber the real ones, so that an arbitrary observer
is far more likely to inhabit a fake universe than a real one. This leads to the disturbing – some
might say ridiculous – conclusion that this universe is probably a fake! If one were to take such a
bizarre conclusion seriously, it would imply that the laws of physics are the product of intelligent
design, in the form of skillfully crafted software running on an information processing system in another universe.

The fourth explanation (D), which seeks to account for the universe’s life-fostering qualities, is based on the notion that life and observers, and the underlying laws of physics that
permit them to emerge in the universe, are somehow mutually explanatory. The so-called strong
anthropic principle (SAP) is one statement of this inter-dependence (Carter, 1974; Barrow and
Tipler, 1986). It asserts that the laws of physics must be such that observers will arise somewhere
and somewhen in the universe. To use Freeman Dyson’s much-cited phrase (Dyson, 1979), “the universe in some sense must have known that we were coming.”
A link between the existence of living observers on one hand and the laws that permit their
emergence on the other is a tantalizing idea, but not without deep conceptual difficulties that go
to the very heart of the scientific enterprise. A founding tenet of physical science, dating at least
from the time of Newton, is the existence of a duality of laws and states. The laws of physics
normally have the status of timeless eternal truths that are simply “given.” By contrast, physical
states are contingent (on the laws and also on initial and boundary conditions) and time-
dependent. Thus, according to orthodoxy, the laws affect how states of the world evolve, but are
themselves unaffected by those changing states. There is a curious asymmetry here: the laws
“stand aloof” from the hubbub of the cosmos even as they serve to determine it. This “aloofness”
accords well with the strong flavor of Platonism running through theoretical physics. Most
physicists think of the laws as really existing, but in a realm that transcends the physical universe
and is untouched by it. It is a point of view inherited from mathematics. Plato envisaged a realm
of perfect mathematical forms of which the geometrical and arithmetical arrangements of the
physical world were regarded as but a flawed shadow. In the same vein, theoretical physicists are
wont to envisage the laws of physics as perfect, idealized mathematical objects and equations inhabiting an abstract, transcendent realm.
So long as one is wedded to a Platonic interpretation of the nature of physical law, the strong
anthropic principle looks just plain ridiculous. Why should the laws of physics, which are
universal and apply to all physical systems, “care about” such things as life and consciousness?
In what manner does a very special and specific state of matter – the living state – serve to
determine or even constrain the very laws of the universe, in a manner calculated to ensure
cosmic bio-friendliness? If states and laws inhabit separate conceptual realms, then the laws are
what they are in the Platonic world, irrespective of which specific states may or may not evolve in the physical world.
A second serious problem with the strong anthropic principle concerns its teleological character.
The living state, let alone the conscious state, presumably emerged in the universe only after some
billions of years of cosmic evolution, yet the laws of physics are either timelessly determined, or
“laid down” (somehow!) at the time of the big bang. Even if one can accept some sort of coercive
link between life (and/or mind) and laws, how does the existence of the living state at a later time
“reach back” and ensure that the universe starts out with the right laws and initial conditions to
bring life about billions of years later? What is the mechanism of this retro-causation? Orthodox physics offers none.
One may clearly conclude that the standard picture of physical law has no room for the strong
anthropic principle. However, the standard picture of Platonism and revulsion of teleology is
based on little more than an act of faith, and has the status more of a convenient working
hypothesis than an empirically tested theoretical framework. For example, the dualism of
timeless idealized mathematical laws and temporal contingent states enables the laws of physics
to be expressed in the form of differential equations, from which unique solutions follow by
imposing contingent initial and boundary conditions. The very basis of science hinges on this
convenience. Notice that to pursue the scientific project along these lines, one has to take
seriously the real number system, perfect differentiability, unbounded exponentiation, infinite and
infinitesimal quantities and all the other paraphernalia of standard mathematics, including
Platonic geometrical forms. These structures and procedures normally require an infinite amount of information to specify and manipulate exactly.
If this traditional conceptual straitjacket is relaxed, however, all sorts of possibilities follow. For
example, one may contemplate the co-evolution of laws and states, in which the actual state of
the universe serves to determine (in part) the form of the laws, and vice versa. Radical though this
departure may seem at first blush, it comes close to the spirit of the string theory landscape, in
which the quantum state “explores” a range of possible low-energy effective laws, so that the
late-time laws that emerge in a bubble have the character of “congealing” out of the quantum ferment. Further bubble nucleation may then occur, triggering a region in which the laws change again depending on the quantum state within the
bubble. In the string theory landscape example, there is still a backdrop of traditional fixed and
eternal fundamental laws – e.g. the string theory Lagrangian – that escape the mutational
influences of evolving quantum states. It is only the low-energy effective laws that change with
time. A more radical proposal has been suggested by Wheeler, in which “there are no laws except
the law that there is no law” (Wheeler, 1983). In Wheeler’s proposal, everything “comes out of
higgledy-piggledy,” with laws and states congealing together from the quantum ferment of the big bang.
One motivation for considering the kind of looser picture of physical law suggested by Wheeler
comes from the burgeoning science of quantum information theory. The traditional logical dependence may be expressed schematically as

A. laws of physics → matter → information
Thus, conventionally, the laws of physics form the absolute and eternal bedrock of physical
reality, and cannot be changed by anything that happens in the universe. Matter conforms to the
“given” laws, while information is a derivative, or secondary property having to do with certain
special states of matter. But several physicists have suggested that the logical dependence should
really be as follows:

B. laws of physics → information → matter
In this scheme, often described informally by the dictum “the universe is a computer,”
information is placed at a more fundamental level than matter. Nature is treated as a vast
information-processing system, and particles of matter are certain special states which, when
interrogated by, say, a particle detector, extract or process the underlying quantum state information. This viewpoint is captured in Wheeler’s pithy phrase “It from bit” (Wheeler 1994). Treating the universe as a computer has
been advocated by Fredkin (1990), Lloyd (2002, 2006) and Wolfram (2002) among others.
An even more radical transformation is to place information at the base of the logical sequence,
thus

C. information → laws of physics → matter
The attraction of scheme C is that, after all, the laws of physics are informational statements. In
the orthodox scheme A, it remains an unexplained concordance that the laws of physics are mathematical in form, a fact famously described by Wigner as “the unreasonable effectiveness of mathematics in the natural sciences” (Wigner, 1960).
For most purposes the order of logical dependence does not matter much, but when it comes to
the informational content of the universe as a whole, one is forced to confront the status of
information: is it ontological or epistemological? The problem arises because what we call the
universe (perhaps only a pocket universe within a multiverse) is a finite system. It has a finite age
(13.7 billion years), and a finite speed of light that defines a causal region of about a Hubble
volume. Lloyd has estimated that this finite spacetime region contains at most 10^122 bits of
information (Lloyd, 2002). A similar result follows from appealing to the so-called holographic
principle (‘t Hooft, 1993; Susskind, 1995). So even if we were to commandeer the entire
observable universe and use it to compute, we would be limited in the degree of fidelity of our
calculations. If one believes in a Platonic heaven, then this practical limit is irrelevant to the
operation of physical laws, because these laws do not compute in the (resource-limited) universe;
they compute in the (infinitely resourced) Platonic realm. However, if one relinquishes idealized
Platonism then one may legitimately ask whether the finite information processing capacity of the universe has consequences for the operation of physical law.
Rolf Landauer for one believed so. He was a strong advocate of the view that “the universe
computes in the universe,” because he believed that “information is physical.” He summed up his position thus:
“The calculative process, just like the measurement process, is subject to some limitations. A
sensible theory of physics must respect these limitations, and should not invoke calculative routines that in fact cannot be carried out” (Landauer, 1967).
In other words, in a universe limited in resources and time – a universe subject to the information
bound of 10^122 bits in fact – concepts like real numbers, differentiable functions, and the unitary
evolution of a quantum state – are a fiction: a useful fiction to be sure, but a fiction nevertheless.
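The magnitude of the information bound can be checked with a back-of-the-envelope calculation: by the holographic principle, the number of bits scales as the horizon area in Planck units. The rounded constants below are my own illustrative inputs; only the 13.7-billion-year age and the 10^122 result come from the text.

```python
import math

# Order-of-magnitude sketch of the holographic information bound:
# bits ~ (horizon radius / Planck length)^2, i.e. the horizon area in Planck units.
c = 3.0e8                      # speed of light, m/s (rounded)
age = 13.7e9 * 3.15e7          # age of the universe in seconds (~13.7 Gyr)
l_planck = 1.6e-35             # Planck length in metres (rounded)

radius = c * age               # crude Hubble-scale horizon radius, m
bits = (radius / l_planck) ** 2

print(f"10^{math.log10(bits):.0f} bits")  # -> 10^122 bits
```

The estimate is insensitive to the rounding: even factor-of-a-few changes in the inputs move the exponent by less than one.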
Consider, for example, a quantum state built from just two basis states:

ψ = α₁|0⟩ + α₂|1⟩.                                                        (1)

The amplitudes α₁ and α₂ are complex numbers which, in general, demand an infinite amount of
information to specify them precisely (envisage them written as an infinite binary string). If
information is regarded simply as a description of what we know about the physical world, as is
implied by Scheme A, there is no reason why Mother Nature should have a problem with infinite
binary strings. Or, to switch metaphors, the bedrock of physical reality according to Scheme A is
sought in the perfect laws of physics, which live elsewhere, in the realm of the gods – the
Platonic domain they are held by tradition to inhabit, where they can compute to arbitrary
precision with the unlimited amounts of information at their disposal. If one maintains that
information is indeed “merely epistemological,” and that the mathematically idealized laws of
physics are the true ontological reality, as in Scheme A, then infinitely information-rich complex
numbers α₁ and α₂ exist contentedly in the Platonic heaven, where they can be subjected to
infinitely precise idealized mathematical operations such as unitary evolution. And the fact that
we humans cannot, even in principle, and even by commandeering the entire observable universe, carry out such operations is simply an implication of the information bound (3). To repeat, A says: The universe does not compute in the universe; it computes in the Platonic realm.
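The claim that a generic amplitude carries infinite information can be made concrete with a small sketch. The choice of 1/√2 is my own illustrative example of an amplitude with a non-terminating binary expansion: however many bits of its expansion are kept, a residual error always remains.

```python
import math

alpha = 1.0 / math.sqrt(2.0)   # a typical amplitude; its binary expansion never terminates

def truncate_bits(x, n):
    """Keep only the first n binary digits of x, where 0 <= x < 1."""
    return math.floor(x * 2**n) / 2**n

for n in (8, 16, 32):
    error = abs(alpha - truncate_bits(alpha, n))
    print(n, error)            # the error shrinks like 2^-n but never reaches zero
```

A finite bit budget therefore fixes a floor below which the amplitude is simply undefined, which is the operational content of Landauer's objection.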
But if information is ontological, as for example in the heretical Scheme C, then we are obliged
to assume that “the universe computes in the universe,” and there isn’t an infinite source of free
information in a Platonic realm at the disposal of Mother Nature. In that case, the bound of 10^122
bits applies to all forms of information, including such numbers as α₁ and α₂ in Eq. (1), as well as
to the dynamical evolution of the state vector ψ. In general, a state vector will have an infinite
number of components, or branches of the wave function, expressed by the practice of describing
that state vector using an infinite-dimensional Hilbert space. If one takes seriously Landauer’s
philosophy and the 10^122 bound, it is simply not permissible to invoke a Hilbert space with an infinite number of dimensions, or to invoke idealized complex numbers that require an infinite amount of information to specify. The result is an inherent
ambiguity in the operation of physical laws whenever our description of those laws approaches
the bound. For many practical purposes, the bound is so large that any ambiguity will be
insignificant. Problems arise, however, when exponentiation is involved, such as in systems with
deterministic chaos.
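Why exponentiation is the danger can be seen in a few lines. The logistic map is my own standard illustration, not an example from the text: a chaotic map amplifies an initial difference by a factor of roughly two per step, so a perturbation at the limit of double precision exhausts all 52 bits of the floating-point mantissa after a few dozen iterations.

```python
def logistic(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0, x1 = 0.1, 0.1 + 1e-15      # two starting points differing near machine precision
for steps in (10, 30, 60):
    # the separation grows roughly as 2^steps until it saturates at order one
    print(steps, abs(logistic(x0, steps) - logistic(x1, steps)))
```

In the same way, a cosmic bound of 10^122 bits is exhausted not by large systems as such, but by any process whose description length grows exponentially.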
To take a specific example, consider the case of quantum entanglement, which lies at the heart of
the proposal to build a quantum computer. A simple case consists of a system of n fermions, each
of which has two spin eigenstates, up or down. Classically, this system has 2^n possible states, but a general quantum state is a superposition of all of them, requiring 2^n independent amplitudes to specify, on account of entanglement. It is this exponential improvement that holds the power and promise of quantum
computation (for an introduction, see Nielsen and Chuang, 2000). By evolving the quantum state in superposition, a quantum computer can in effect process exponentially many inputs at once. The word “exponential” here is a warning flag, however. If the system contains more than about 400
particles, then the size of the Hilbert space alone exceeds the total information capacity of the
universe, so that even using the entire universe as an informational resource, it would not be
possible to specify an arbitrary quantum state of 400 particles, let alone model its evolution with
time. Does this render practical quantum computation a pipe-dream? Not necessarily. Although
an arbitrary quantum state of > 400 particles cannot be specified, or its unitary evolution
described, there is a (tiny) subset of quantum states that can be specified with very much less
information: for example, the state in which all coefficients are the same and in which a small
margin of error in the amplitudes is of no consequence. If the problems of practical interest enjoy
this compressibility, then the initial states of the quantum computer might be constructed to
within the required accuracy, allowed to evolve, and the answer read out. Note, however, that
during the evolution of the state, the system will in general invade a region of Hilbert space far in
excess of 10^122 dimensions. Obviously no human system – indeed, no system within the observable universe – could track that evolution on an amplitude-by-amplitude basis. But if one is a Platonist that doesn’t matter: the unitary evolution will run smoothly in the Platonic heaven untrammeled by the 10^122 bit bound operating within the physical universe. On
the other hand, if one adopts Landauer’s philosophy, then there is no justification whatever for
believing that the wave function will evolve unitarily through an arbitrarily large region of
Hilbert space. In general, unitary evolution will break down under these circumstances. What is
not clear is whether departures from unitarity, which would be manifested as an irreducible
source of error, would serve to wreck a practical calculation. It may not be too long before the
experiment can be performed, however, because entanglements of 400 or more particles are within the reach of foreseeable technology.
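The figure of “about 400 particles” follows from simple arithmetic: the Hilbert-space dimension 2^n first exceeds 10^122 when n · log₁₀2 > 122. A minimal check, taking the exponent 122 from Lloyd’s bound as quoted in the text:

```python
import math

bound_exponent = 122                       # Lloyd's ~10^122-bit cosmic information bound
n = math.ceil(bound_exponent / math.log10(2))
print(n)                                   # -> 406, i.e. "more than about 400" spins
```

So a register of a few hundred entangled spins already has more basis states than the universe has bits with which to label them.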
Paul Benioff, one of the founders of the theory of quantum computation, has also examined the relationship between the laws of physics and mathematics. Rather than positing a Platonic realm of perfect, idealized mathematical objects and operations that just happen to exist, and a physical
universe that just happens to appropriate a subset of those objects and operations to describe its
laws, Benioff (2002) proposes that physics and mathematics co-emerge in a self-consistent
manner. In other words, mathematics comes out of physics even as physics comes out of mathematics: “… physical process … Physical law, in turn, consists of algorithms for information processing.
Therefore, the ultimate form of physical laws must be consistent with the restrictions on the
physical executability of algorithms, which is in turn dependent on physical law.” This scheme, in
which mathematical laws are self-consistently emergent rather than fundamental and god-given,
automatically addresses Wigner’s observation about the unreasonable effectiveness of mathematics in describing the physical world.
Returning to the issue of the strong anthropic principle, the view of physical law expounded by
Wheeler, Landauer and Benioff, where laws are rooted in the actual states of the physical
universe, opens the way to a scheme in which laws and states co-emerge from the big bang, and
co-evolve, perhaps in the direction of life and consciousness. One way to express this is that the
state space of physical systems (phase space, Hilbert space) might be enlarged to include the
space of laws too. In this product space of states and laws, life could be distinguished as
something like an attractor, so that the universe would evolve laws and states that eventually
bring life into being, thus explaining the appearance of teleology in terms of the mathematical
properties of the product space. Of course this is nothing more than hand-waving conjecture, but
the post-Platonic view of physical law coming from the quantum information revolution will
clearly have sweeping implications for our understanding of the very early universe, the
properties of which include the unexpected suitability of the universe for life.
Most physicists expect a final unified theory to take the form of a compact mathematical scheme, preferably deriving from an elegant and simple underlying principle,
which would provide a unified description of all forces and particles, as well as space and time.
Currently string/M theory and loop quantum gravity are popular contenders, although over the
years there have been very different proposals, such as Wheeler’s pre-geometry (Wheeler, 1980)
and Penrose’s twistor program (Huggett and Tod, 1994). Although full unification remains a
distant dream, many discussions of the prospect give the impression that if it were to be achieved,
there would be nothing left to explain, i.e. that the unified theory would constitute a complete and
closed explanation for physical existence. In this section I shall examine the status of that claim.
Proponents of NFP (no free parameters) final theories argue that if all observed quantities are determined (correctly,
one assumes) by the theory, then theoretical physics (at least in its reductionistic manifestation)
would be complete. Such a completion was foreshadowed many years ago, somewhat prematurely, by Hawking (1980). However, one can readily write down any number of candidate final theories, including many NFP theories, which describe universes very different from the one
we observe. So even with a NFP final theory at our disposal, one would still have to explain why
that particular theory is the “chosen” one, i.e. the one to be instantiated in a physical universe, to
have “fire breathed into it.” Why, for example, didn’t the Thirring model have fire breathed into
it? Why was it a unified theory that permits life, consciousness and comprehension that got selected?
Which brings me to the vexatious question of what, exactly, performs the selection? Who, or
what, gets to choose what exists? If there is no unique final theory (which there clearly isn’t),
then we are bound to ask, why this one? That is, why did the putative final theory that by hypothesis describes the observed world get singled out (“You shall have a universe!”), while all the rest remained unrealized possibilities?
The problem reappears in another guise in the multiverse theory. At first sight, one might think
that something like the string theory landscape combined with eternal inflation would instantiate
all possible universes with all possible effective laws. But this is not so. Many unexplained
ingredients go into the string theory multiverse. For example, there has to be a universe
generating mechanism, such as eternal inflation, which operates according to some transcendent
physical laws, for example, quantum mechanics. But these laws, at least, have to be assumed as given. It is easy to imagine a different universe-generating mechanism: for example, one based on an analogue of quantum mechanics, but with the Hilbert
space taken over the field of the quaternions or the real numbers rather than complex numbers. In
addition, one has to assume the equations of string/M theory to derive the landscape. But it is
easy to imagine a different theory describing a different landscape. Remember, we do not need
this different theory to be consistent with what we observe in the real universe, only that it be internally consistent.
At rock bottom, there are only two “natural” states of affairs in the existence business. The first is
that nothing exists, which is immediately ruled out by observation. The second is that everything
exists. By this, I mean that everything that can exist – everything that is logically possible – really
does exist somewhere. The multiverse would contain all possible universes described by all
possible laws, including laws involving radically different mathematical objects and operations
(and all possible non-mathematical descriptions too, such as those that conform to aesthetic or
teleological principles). Just such a proposal has been made by Tegmark (2003), who points out
that the vast majority of these possible worlds are inconsistent with life and so go unobserved.
Although observation cannot be used to rule out Tegmark’s “everything goes” multiverse, I
believe that very few scientists would be prepared to go that far. Most scientists assume that what exists, even if it includes entire other universes that will lie forever beyond our ken, is less than
everything. But if less than everything exists, a problem looms. Who or what gets to decide what
exists and what doesn’t? In the vast space of all possible worlds, a boundary divides that which
exists from that which is logically possible but in fact non-existent. Where does this boundary
come from, and why that boundary rather than some other? Anthropic selection can help separate
that which is observed from that which exists but cannot be observed, but it can do nothing to
explain why that which does not exist failed to come into being. Therefore, unless one adopts Tegmark’s
extreme multiverse hypothesis, we are still left with a metaphysical mystery concerning the
ultimate source of existence: why that which is selected for existence, even if it is a multiverse consisting mostly of sterile universes, contains a subset of universes that support life
and observers. The multiverse seems to offer progress in explaining cosmic bio-friendliness in
terms of a selection effect, but in fact it merely shifts the enigma up a level from universe to
multiverse. The vast majority of multiverses that fall short of the Tegmark ideal will be
multiverses that possess no universe in which the laws and conditions permit life. So the ancient
mystery of “why that universe” is replaced with a bigger mystery: why that multiverse?
10. Conclusion
In reviewing the various explanations for the laws of physics, I am struck by how ridiculous they all seem. The main options may be summarized as follows:
A. The universe is ultimately absurd. Its laws exist reasonlessly, and their life-friendly qualities
have no explanation and no significance. The whole of reality is pointless and arbitrary.
Somehow an absurd universe has contrived to mimic a meaningful one, but this is just a fiendish trick.
B. A multiverse exists, encompassing a vast array of different laws and conditions. The laws of physics in our universe are suited to life because they are selected by our own existence. The fact
that the laws of nature are also comprehensible is an unexplained fluke. The existence of a
universe generating mechanism, and a set of base laws, e.g. quantum mechanics, is also
unexplained.
C. Everything that can exist does exist. Nothing in particular is explained because everything is
explained.
D. The universe and its life-friendly laws were made by an unexplained transcendent pre-existing
God, or are the product of a natural god, or godlike superintelligence, that evolved in an earlier universe.
E. The universe, its bio-friendly laws, and the observers that follow from them, are somehow self-explaining or self-creating, perhaps through a closed loop of co-evolving laws and states.
It is hard to see how further progress can be made in addressing these ultimate questions of
existence, and in the end it may be necessary to concede that the questions have no answers
because they are ultimately meaningless. The entire discussion – indeed, the entire scientific
enterprise – is predicated on concepts and modes of thought that are the product of biological
evolution. The Darwinian processes that built our minds compel us to address issues of cause and
effect, space and time, mind and matter, logic and rationality, physics and metaphysics in certain
well-defined ways. Both religion and science proceed from these universal human categories and
we seem bound to seek explanations within their confines. It may well be that “explanation”
couched in these ancient modes of thought will inevitably fail to encompass the deepest problems of existence.
References
Barrow J.D. and Tipler, F.J. (1986). The Anthropic Cosmological Principle. Oxford University
Press, Oxford.
Birrell, N.D. and Davies, P.C.W. (1982). Quantum Fields in Curved Space. Cambridge University Press, Cambridge.
Birrell, N.D. and Davies, P.C.W. (1978). Massless Thirring model in curved space: thermal states and conformal anomaly. Physical Review D.
Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, vol. 53, p. 243-255.
Carter, B. (1974). Large number coincidences and the anthropic principle in cosmology. In Confrontation of Cosmological Theories with Observational Data, IAU Symposia No. 63, edited by M.S. Longair. Reidel, Dordrecht, p. 291.
Chuang, I.L. and Nielsen, M.A. (2000). Quantum Computation and Quantum Information. Cambridge University Press, Cambridge.
Davies, P.C.W. (2006). The Goldilocks Enigma: why is the universe just right for life? Penguin,
London.
Davies, P.C.W. (1992). The Mind of God. Simon & Schuster, London and New York.
Dyson, F. (1979). Disturbing the Universe. Harper & Row, New York, p. 250
Farhi, E. and Guth, A.H. (1987). An obstacle to creating a universe in the laboratory. Physics Letters B, vol. 183, p. 149.
Gott III, J.R. and Li, L-X. (1998). Can the universe create itself? Physical Review D, vol. 58, p.
023501
Gross, D. (2003). Where do we stand in fundamental theory? In String Theory and Cosmology, edited by Ulf Danielsson, Ariel Goobar and Bengt Nilsson, August 14-19, Sigtuna, Sweden. Proceedings published in Physica Scripta, The Royal Swedish Academy of Sciences, vol. T117, p. 102 (2005).
Hawking, S.W. (1994). Black Holes and Baby Universes. Bantam, New York.
Hawking, S.W. (1988). A Brief History of Time. Bantam, New York, p. 174.
Hawking, S.W. (1980). Is the End in Sight for Theoretical Physics? An Inaugural Lecture. Cambridge University Press, Cambridge.
Hoyle, F. (1982). The universe: past and present reflections. Annual Review of Astronomy and Astrophysics, vol. 20, p. 1.
Huggett, S.A. and Tod, K.P. (1994). An Introduction to Twistor Theory. London Mathematical Society Student Texts, Cambridge University Press, Cambridge.
Krauss, L. (1994). Fear of Physics: A Guide for the Perplexed. Basic Books, New York.
Landauer, R. (1967). Wanted: a physically possible theory of physics. IEEE Spectrum, vol. 4, no. 9, p. 105-109.
Linde, A. (1992). Stochastic approach to tunneling and baby universe formation. Nuclear Physics B, vol. 372, p. 421.
Linde, A. (1990). Inflation and Quantum Cosmology. Academic Press, San Diego, California.
Lloyd, S. (2006). Programming the Universe. Random House, New York.
Lloyd, S. (2002). Computational capacity of the universe. Physical Review Letters, vol. 88, p.
237901
Mitton, S. (2005). Conflict in the Cosmos: Fred Hoyle’s Life in Science. Joseph Henry Press,
Washington
Smolin, L. (2002). Three Roads to Quantum Gravity. Basic Books, New York
Susskind, L. (2005). The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. Little, Brown, New York.
Susskind, L. (1995). The world as a hologram. Journal of Mathematical Physics, vol. 36, p. 6377-6396.
Tegmark, M. et al. (2006). Dimensionless constants, cosmology and other dark matters. Physical Review D, vol. 73, p. 023505.
Tegmark, M. (2003). Parallel universes. Scientific American (May), p. 31.
Ward, K. (2005). God: A Guide for the Perplexed. Oneworld Publications, Oxford
Weinstein, S. (2006). Anthropic reasoning and typicality in multiverse cosmology and string theory. Classical and Quantum Gravity, vol. 23, p. 4231.
Wheeler, J.A. (1994). At Home in the Universe. AIP Press, New York, p. 295-311.
Wheeler, J.A. (1989). Information, physics, quantum: the search for links. Proceedings of the 3rd International Symposium on Foundations of Quantum Mechanics in the Light of New Technology, Tokyo, p. 354-368.
Wheeler, J.A. (1983). On recognizing “law without law”. American Journal of Physics, vol. 51, p. 398-404.
Wheeler, J.A. (1980). Pregeometry: motivations and prospects. In A.R. Marlow (Ed.), Quantum Theory and Gravitation. Academic Press, New York.
Wigner, E.P. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Communications on Pure and Applied Mathematics, vol. 13, p. 1-14.
Wolfram, S. (2002). A New Kind of Science. Wolfram Media Inc., Champaign, Ill.