
Manifesto of an Amateur Scientist

By John Winders

z → z² + c
Note to my readers:
You can access and download this essay and my other essays through the Amateur
Scientist Essays website under Direct Downloads at the following URL:

https://sites.google.com/site/amateurscientistessays/

You are free to download and share all of my essays without any restrictions, although it
would be very nice to credit my work when quoting directly from them.
Note: The figure on the cover page is a three-dimensional slice cut through a four-dimensional
Julia set. It is generated from a simple quaternion feedback formula: z → z² + c
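For readers who want to experiment with that formula, here is a minimal Python sketch (my own illustration, not the program that generated the cover image; the constant c below is an arbitrary choice):

    # A minimal sketch of the quaternion iteration z -> z^2 + c. A point is
    # kept in the filled Julia set if its orbit stays bounded.

    def quat_square(q):
        # Square of a quaternion q = (w, x, y, z):
        # q^2 = (w^2 - x^2 - y^2 - z^2, 2wx, 2wy, 2wz)
        w, x, y, z = q
        return (w*w - x*x - y*y - z*z, 2*w*x, 2*w*y, 2*w*z)

    def in_julia_set(q, c, max_iter=50, bailout=4.0):
        # Iterate z -> z^2 + c; escape once the squared norm passes the bailout.
        z = q
        for _ in range(max_iter):
            zs = quat_square(z)
            z = tuple(a + b for a, b in zip(zs, c))
            if sum(v*v for v in z) > bailout:
                return False
        return True

    # A 3-D slice like the cover's is obtained by fixing one component (here w = 0).
    c = (-0.2, 0.6, 0.2, 0.2)
    print(in_julia_set((0.0, 0.1, 0.1, 0.1), c))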

In Is Science Solving the Reality Riddle? I raised some questions involving current scientific dogma
and tentatively tried to answer some of my own questions. By the time I finished that essay, the fog
of confusion had lifted a little, so I went ahead with Order, Chaos, and the End of Reductionism.
The purpose of this essay is to write down some ideas and opinions that have since crystallized in
my mind, before old age and senility make that an impossibility.
I want to state up front that I'm not anti-science. As a retired engineer, I take pride in the fact that
I've mastered at least some of the principles from the vast storehouse of accumulated scientific
knowledge. Engineers are the problem solvers, but our profession owes a debt of gratitude to the
scientists – the thinkers and dreamers – who have added so much to that storehouse. But even as
we use scientific reductionism to solve many day-to-day problems, this approach can only give us a
rough approximation of reality. The time has come for scientists to explore other avenues of
knowledge that will induce nature to reveal herself on her own terms – not ours.
Here then, is a summary of my conjectures on these matters – Manifesto of an Amateur Scientist.
1. Quantum Physics and General Relativity Cannot Be Reconciled.
For the past 100 years, science has labored under the yoke of two completely incompatible theories
of reality: Albert Einstein's general theory of relativity, and quantum mechanics. Today, there is a
consensus among physicists that there should be, and there must be, one theory and one theory
alone that explains all things from the tiniest quarks to galactic clusters and beyond to include the
entire universe. An enormous amount of effort has been spent in trying to accomplish this by
making relativity look more like quantum physics, or making quantum physics look more like
relativity. The engineer in me asks, “Why bother?” Just as Newton's laws work exceptionally well
within their prescribed limits, general relativity and quantum mechanics also work exceptionally
well within theirs. These theories are incomplete, but do they have to be reconciled in order to
serve a useful purpose? Can they even be reconciled?
2. Solutions to the General Relativity Field Equations Are Approximations.
The field equations that Albert Einstein derived in 1915 are what I would call The (Almost) General
Theory of Relativity. Einstein brilliantly saw an equivalence between linear acceleration and a
gravitational field and used that as a first principle. Rotating motions were left as solutions to the
field equations, but there was no fundamental equivalence built in for them. Consequently, when
rotation is included in cases to be solved, anomalies and paradoxes appear in those solutions,
including naked singularities revealed by rotating black holes, and time loops in rotating universes.
On the other hand, an intrinsic state of spin (angular momentum) is incorporated into quantum
mechanics from the very beginning. The solutions to the field equations of relativity also seem to
break down at the Schwarzschild radius surrounding non-rotating black holes, and when
cosmologists apply them to solving the state of the entire universe. I suspect (although I can't
prove) that those particular breakdowns occur because time and space are switching places.
3. Quantum Field Theory Cannot Be Derived from First Principles.
Quantum physics is hugely successful, in particular quantum field theory, pioneered by Richard
Feynman. Many predictions have been borne out by experiment; most recently, the Higgs particle
was uncovered in the debris of proton-proton collisions in the Large Hadron Collider. This is an
amazing track record. However, the theory is still very much ad hoc. Part of it relies on a very
questionable mathematical procedure known as “renormalization” that subtracts infinity from
infinity to arrive at finite values. Another undesirable feature is the requirement of plugging
numerous physical constants into the field equations; those constants have arbitrary values that
cannot be derived from first principles and seem to bear no relationships to one another.
4. M-Theory Is Hopelessly Reductionist.
Physicists hoped that string theory (or M-theory as its proponents refer to it) would unify general
relativity with quantum mechanics. Physics is currently based on reductionism, which holds the view
that the whole is always equal to the sum of its parts. This might be the case in a perfectly linear
universe where the wave functions of the parts interfere constructively and destructively to produce
the tsunami wave function that represents the whole. Unfortunately, the universe is only linear as
an approximation. M-theory is an extreme form of reductionism, taking the search for a unified
theory in the wrong direction.
5. The Effect of Space and Time Is To Censor Information.
In Reality Riddle, I made a conjecture that reality is made up of information. Everything that we
call “solid” is mostly empty space. Empty space isn't really empty. And forces, which seem so
powerful, may simply be the tendency for the amount of information in the universe to increase.
Erik Verlinde has proposed a theory of gravity based on entropy, and entropy and information are
essentially the same thing. According to the holographic universe principle, the state of every
volume in space is encoded by bits of information on the two-dimensional surface surrounding the
volume. In other words, what we “see” in every volume is nothing more than the information
encoded on a screen surrounding it. That screen has a finite capacity for storing information. Every
observer views the entire universe through a screen that surrounds the universe. Thus, by
reciprocity, the amount of information that is available to the observer is limited. Space and time
are incorporated into a mechanism that limits information that is knowable to an observer. Objects
that are separated from us in space are also separated from us by time. The only “now” for us is
right “here.” The farther away an object is, the less of its history is available to us.
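As a back-of-envelope illustration of that finite capacity, the standard holographic bound allows roughly one bit per four Planck areas of screen. Here is a short Python sketch of my own, assuming a rough radius for the observable universe:

    import math

    l_P = 1.616e-35                        # Planck length, meters
    R = 4.4e26                             # rough radius of the observable universe, meters
    A = 4 * math.pi * R**2                 # area of the surrounding spherical "screen"
    bits = A / (4 * l_P**2 * math.log(2))  # holographic bound: one bit per 4 Planck areas
    print(f"screen capacity ~ {bits:.1e} bits")   # on the order of 10^123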
6. Conflating Space and Time Is a Mistake – They Are Different Things.
The universe creates information in the Now. The Past is simply a record of all the information that
has ever been created. There is no information about the Future, so the Future does not exist; it is
only a projection in our minds of what might become, based on what is happening and what has
happened. We have the freedom to navigate among things that are scattered about us throughout
space. We cannot visit the Past – we can only see the records of the past that survive in the Now –
and of course we cannot visit the Future that doesn't exist. Therefore, we have only three degrees of
freedom in our universe, not four. Space is not time and time is not space; they're different things.
Hermann Minkowski invented the four-dimensional space-time continuum, and gave the temporal
dimension the “imaginary” label i, equal to √-1, in order to distinguish time from the three “real”
spatial dimensions. He had to make that distinction; otherwise, the mathematics wouldn't make any
sense. Einstein enthusiastically adopted Minkowski space and used it to express special relativity in
clean, elegant equations. He incorporated Minkowski space into general relativity, allowing gravity
to bend it as if it were a sheet of rubber. Later, quantum physicists like Richard Feynman incorporated
Minkowski space into their models of particle interactions, showing particles from the “past”
merging with antiparticles from the “future.” But if space and time are qualitatively different
things, what happens when space-time becomes so distorted that time and space trade places? Then
the equations give you event horizons around black holes, naked singularities, and time loops.
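A quick numerical check of Minkowski's bookkeeping (my own sketch, in units where c = 1 and with an arbitrary boost velocity): the interval x² − (ct)², in which time enters with the opposite sign to space, is exactly the quantity a Lorentz boost preserves:

    import math

    # Verify that the Minkowski interval x^2 - t^2 is unchanged by a Lorentz
    # boost (c = 1 units; the velocity v and the event (t, x) are arbitrary).
    v = 0.6
    gamma = 1 / math.sqrt(1 - v*v)
    t, x = 2.0, 1.0
    t2 = gamma * (t - v*x)
    x2 = gamma * (x - v*t)
    print(x*x - t*t, x2*x2 - t2*t2)   # both print -3.0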
7. Cosmology Lacks a Sound Theoretical Basis and Sufficient Observational Data.
The problem with cosmology is twofold: a) science doesn't yet have a true cosmological theory,
and b) observational information is limited and distorted. First, I don't believe general relativity can
serve as the basis for a cosmological theory because, as I stated earlier, it only gives approximate
answers. Attempting to find solutions to the field equations that represent the entire universe will
produce anomalies and paradoxes. A true cosmological theory is needed. Second, science lacks
sufficient observational data because the perspective of overall scale becomes distorted when
looking at the early universe through telescopes. Science currently accepts the premise of an
expanding universe, and I have no argument with that premise. Our telescopes should then reveal
smaller spaces between distant galaxies than spaces between nearby galaxies, simply because the
light from distant galaxies was emitted when those galaxies were closer together. But telescopes
don't show that, so our 3-dimensional view of the “past” is severely distorted. Suppose a new form
of radiation made it possible to “see” through the cosmic haze of the early universe all the way back
to the big bang event itself. My conjecture is that the big bang would appear to fill the entire sky –
a pinpoint object blown into cosmic dimensions. What then would we be looking at? Can our puny
three-dimensional brains understand the big bang event within our limited mental framework of
space and time? Are the field equations of general relativity up to the task of modeling the cosmos
without breaking down and producing mathematical gibberish? Can the current version of quantum
field theory be used to make predictions based on the conditions that existed during the first
moments of creation? I don't think so. Without a working theory and good observational data,
cosmology is, at best, science fiction.
8. The True Meaning of the Second Law of Thermodynamics Is Misunderstood.
The second law of thermodynamics has given scientists fits of depression for more than a century.
It seems to condemn the entire universe to a fate worse than death – heat death. In this final state,
nothing changes; nothing can ever change, because all the available energy has been used up. Woe
is us! But this state of mind stems from reductionist thinking. Entropy just measures the available
degrees of freedom of a system: S = k log W, where S is entropy, k is Boltzmann's constant, and W
is the number of degrees of freedom, or microstates, a system may have. The number W is
astoundingly large for most systems. Raise the temperature of one gram of water to the boiling
point. It has a total entropy of 7.35 J/K, which translates into W = 10^(7.35/k). This number is
monstrously large: it is a one followed by roughly 10²³ zeros. If you wrote down that number as 1,000,000,
… etc., on letter-sized sheets of paper with five thousand zeros on each page, it would require a
stack of paper over one trillion miles thick. The universe has a voracious appetite for more degrees
of freedom; it is what causes the universe to expand, information to increase, and it is the very force
that drives creation forward. This is my version of the Universal Prime Directive behind every law:
“Every change maximizes the total degrees of freedom of the universe.”
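The arithmetic above is easy to verify. A short Python check, using Boltzmann's constant, the five-thousand-zeros-per-page figure from the illustration, and an assumed sheet thickness of 0.1 mm:

    k = 1.380649e-23                 # Boltzmann's constant, J/K
    S = 7.35                         # entropy of 1 g of water at the boiling point, J/K
    zeros = S / k                    # digits after the leading 1, since W = 10^(S/k)
    pages = zeros / 5000             # sheets at five thousand zeros per page
    miles = pages * 1e-4 / 1609.34   # stack height, assuming 0.1 mm per sheet
    print(f"{zeros:.1e} zeros, {pages:.1e} pages, {miles:.1e} miles")
    # about 5.3e23 zeros and 6.6e12 miles -- "over one trillion miles" indeed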
9. Entropy and Information Are the Same Thing.
Claude Shannon was a brilliant engineer who worked on breaking secret codes in WWII and
studied information in detail after the war at Bell Labs. He came to the conclusion that information
and entropy are the same thing, and I believe him. You can't tell whether information is “good” or
“bad” by looking at it. In fact, a good secret code is one where the message you're sending looks
just like random noise. In Order, Chaos, I wrote a little illustration about building a wall with the
“Mona Lisa” encoded on it. From an information theory perspective, it had the same information as
a random pile of bricks. At Bell Labs, Claude Shannon wasn't particularly interested in the content
of the information transmitted through cables or space. He just wanted to make sure messages got
through without any of the bits changing along the way. Entropy and information don't have
attributes like “good” or “bad.” Entropy isn't “negative information,” as some authors contend.
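Shannon's measure makes this point concrete. A small Python demonstration of my own: ordinary prose measures far below the 8 bits per byte of random noise, and a well-encrypted message would measure like the noise:

    import math, os
    from collections import Counter

    def shannon_bits_per_byte(data):
        # Shannon entropy H = -sum(p * log2 p) over observed byte frequencies.
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    structured = b"the quick brown fox jumps over the lazy dog " * 100
    noise = os.urandom(4400)

    print(f"plain text: {shannon_bits_per_byte(structured):.2f} bits/byte")  # ~4
    print(f"random:     {shannon_bits_per_byte(noise):.2f} bits/byte")       # ~8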
10. Creative Processes Are Emergent and Chaotic.
In the process of maximizing the total degrees of freedom, various processes emerge according to
the present state of the universe. A closed system will try to maximize the total degrees of freedom,
but it can only occupy ones that are available to it in its present state. As systems become more
complex, additional pathways open up to an increasing number of available states. These pathways
are modes of change – creative processes – and they are fundamentally nonlinear and chaotic.
Strangely, chaotic processes also produce order. Moreover, in the face of the randomizing influence
of entropy, order can arise only through processes of self organization. Every self-organizing
process requires three things: the system has degrees of freedom available to it, the system is in a
state of non-equilibrium, and the process possesses some degree of non-linearity.
11. We Live in a Fractal Universe.
My conjecture is that 3-dimensional space is, at least in principle, infinite, and it is a boundary
between order and chaos in a higher-dimensional space. It is where the creative forces are played
out in the universe. An infinite 3-dimensional fractal borderline can be created mathematically
using quaternions, which are 4-dimensional objects. I can offer no proof of this conjecture; it is
based mostly on the fact that fractals have self-similarity that repeats the pattern of the whole down
to the smallest dimensions of the fractal. If the universe were a 3-dimensional fractal space, it
would project fractal-like features down to the microscopic level, which is what is actually
observed. Currently, scientists believe there is a limit to the smallness of objects that can exist in
the universe – the Planck length. If true, this might place limits on both the complexity and the size
of the universe. Interestingly, the Planck length is given by the formula l_P = √(ħG/c³), combining
three fundamental constants in nature. Could those constants be mathematically derived from the
fractal properties of the universe? If so, that would be a very nice proof of this conjecture.
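For reference, the formula evaluates as follows (standard constant values; nothing here depends on the fractal conjecture):

    import math

    hbar = 1.054571817e-34    # reduced Planck constant, J*s
    G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
    c = 2.99792458e8          # speed of light, m/s

    l_P = math.sqrt(hbar * G / c**3)
    print(f"l_P = {l_P:.3e} m")    # about 1.616e-35 meters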
12. Standard Evolutionary Theory Is Wrong.
The theory of evolution is misunderstood – by both its advocates and its opponents. The opponents
of evolution mistakenly invoke the second law of thermodynamics as proof that evolution would
only produce random outcomes. That would be true if the evolutionary process were random, but it
is not. Additionally, the second law only applies to closed systems, whereas all systems (except the
universe itself) are open. On the other hand, the advocates of evolution make the same mistake by
insisting that the process is, in fact, random. This is the reductionist paradigm, which is useful for
analyzing the workings of mechanical clocks, but is hopelessly inadequate for explaining the
complexity of the universe. When applied to evolution, reductionism piles one improbability on top
of another until the whole edifice of creation becomes an absurdity. Michael Behe, a professor of
biochemistry at Lehigh University, points out the improbability of certain cellular structures such as
flagella arising from random mutations. These structures require numerous sequential evolutionary
changes to the DNA molecule to produce them, but none of those individual changes produce
anything that could be called an advantage chosen through natural selection.
13. A New Paradigm of Evolution Is Required.
Fortunately, having to choose between creationism (or intelligent design) and reductionist evolution
is a false dichotomy. Evolution needs to be understood through the operation of chaotic processes,
especially when it comes to biological systems. The universe operates on a hierarchy of laws that
emerge as the state of a system, including the universe as a whole, changes. The “new” laws never
contradict the “old” ones; however, the effects of the old laws become secondary or tertiary as the
system evolves. Emergent laws and processes are not well understood, because they are chaotic.
Reductionist thinking tends to define laws and processes by their effects. Newton's law of
gravitation is defined entirely by a formula that describes the gravitational force: F = G·M₁M₂/r².
However, the entropic theory of gravitation stipulates that there is a tendency for massive objects to
increase entropy (and the total degrees of freedom in the universe). They accomplish this by
coming closer together. Chaotic processes cannot be defined by their effects, even if the effects are
orderly. However, chaotic processes are repeatable; a pseudo-random number generator always
generates the same string of pseudo-random numbers; a Mandelbrot set is always the same.
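That repeatability is easy to demonstrate (a sketch of my own):

    import random

    def mandelbrot_member(c, max_iter=100):
        # True if c appears to lie in the Mandelbrot set (the orbit of 0 stays bounded).
        z = 0 + 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return False
        return True

    # The same seed always yields the same "random" stream...
    rng1, rng2 = random.Random(42), random.Random(42)
    print([rng1.randint(0, 9) for _ in range(5)])
    print([rng2.randint(0, 9) for _ in range(5)])   # identical to the line above

    # ...and the Mandelbrot test always gives the same verdict for the same c.
    print(mandelbrot_member(-0.1 + 0.1j))   # True: inside the set
    print(mandelbrot_member(1.0 + 0.0j))    # False: the orbit escapes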
14. The Laws of the Universe Are Not Static – They Evolve.
Science operates on the assumption that the laws of the universe are the same everywhere and at
every time. The new evolutionary paradigm does not hold on to that assumption. Laws are best
described as processes of change, which may or may not be chaotic. While everything obeys the
Prime Directive of the Universe, it is possible or even likely that individual laws would be modified
or superseded as conditions change. If the big bang theory is correct, the state of the universe was
entirely different early in history. We have no way of knowing or even guessing which processes
prevailed at that stage of its evolution.
15. Life Is a Natural Process.
As awesome as life certainly is, it can be at least partially understood as being a natural, and maybe
inevitable, outcome of universal evolution. In short, life is a natural process. There may not be any
hope of reducing that process into a set of mathematical equations, but that doesn't mean life is
supernatural or that it defies scientific explanation. But I'm not saying, as many reductionists would,
that no bright line exists between living and non-living things. There is a very bright
line in my opinion, and life is much more than a collection of atoms or wave functions. There are
significant qualitative differences between a collection of amino acids and a living bacterium.
Again, it's a matter of a new set of laws emerging from complexity; living things are governed by
laws that operate independently from quantum physics and Newtonian mechanics, but they are still
in harmony with them; science doesn't yet understand these laws, nor can they be understood from a
reductionist perspective.
16. Consciousness Emerges from Material Complexity and Undergoes a Separate Evolution.
If there is a purpose or an “end state” of the universe, it may be the evolution of consciousness.
Consciousness seems to have emerged from material complexity, which means that material
complexity is a sufficient condition for consciousness. The question is whether material complexity
is also a necessary condition. If so, that may provide the rationale for having a universe in the first
place. Consciousness has undergone, and may be undergoing an evolutionary cycle of its own.
Richard Maurice Bucke describes the various mental states of an evolving human race. He said that
until about 5,000 years ago, the majority of humankind existed in a state known as the bicameral
mind; a mental state completely different than what we would call a “normal” state of mind. Over
time, this state of consciousness was replaced by ordinary self consciousness. According to Bucke,
humans undergo an evolutionary process from infancy toward adulthood, which starts as simple
consciousness and ends in self consciousness. Beyond that level is cosmic consciousness, which is
probably the same thing as what Buddhists call satori. A small percentage of humans have attained
that state of consciousness, but Bucke believed it is becoming more common with each passing
century and he listed many of the known cases. Pierre Teilhard de Chardin goes even further; he
said that multiple human consciousnesses form the noosphere, a disembodied global consciousness,
which in turn evolves toward a universal end state he calls the Omega Point. Teilhard's vision of
the noosphere seems to have anticipated the world wide web that emerged in the technological age.
17. Consciousness May Surpass the Brain.
Based on what was stated in the previous conjecture, it remains an open question whether or not
consciousness can survive in a disembodied state. Reductionism would reject such a notion out of
hand, because the whole – consciousness – could not possibly be the sum of its parts – the brain
cells – simply because the whole wouldn't have any parts. However, if we accept the idea that new
chaotic processes emerge when the underlying order reaches a sufficient level of complexity, then I
don't think we can rule out the possibility that consciousness operates somewhat independently
from the neural networks where it originates, and it continues to function in some manner even after
the physical foundation is removed. Of course this gets into the realm of mysticism and
immortality, which is currently outside science; however, new post-reductionist principles and laws
may be discovered that will be incorporated into a scientific theory of consciousness.
18. Some of the Greatest Enemies of Science are Scientists.
There is a growing hostility toward science in the 21st century, especially in America. Part of this is
fueled by religion, which is seen as being under attack by science. Part of this is driven by a
political agenda orchestrated by “captains of industry” who see science as standing in their way of
controlling, exploiting, and monetizing the Earth's precious resources. But some of this is also due to
pronouncements made by scientists themselves, which either turn out to be completely false or only
partially true. It's bad enough that the vast majority of citizens and their leaders are scientifically
ignorant, but it's even worse that scientifically-educated people are doing such a lousy job of
communicating science to the general population. As an engineer who is fairly well acquainted
with scientific principles yet who is very much outside the scientific community, I can plainly see a
fair amount of “group think” among scientists. On one hand, they often refuse to let go of current
paradigms, even in the face of overwhelming contradictory evidence. History books supply many
examples of this. On the other hand, when one of them comes up with a half-baked theory that sounds
good, many jump right on board without showing the slightest trace of skepticism. One example of
this is the ozone depletion scare that led to a global ban of chlorofluorocarbons (CFCs) in the 1980s,
causing a great deal of unnecessary economic disruption. This theory (at least the theory that was
presented to the public) was “voodoo chemistry” where individual chlorine atoms released by CFCs
destroy ozone molecules in the upper atmosphere, deplete the ozone layer, and expose everyone to
deadly ultraviolet rays. But in order for this process to work, the rogue chlorine atoms weren't
allowed to form any known stable compounds with oxygen; they had to be continuously recycled so
they could eat more ozone. This effectively turns those atoms into catalysts that drive the reversible
chemical reaction between oxygen and ozone only in the bad direction. Of course, such one-way
catalysts are an impossibility, and I explained why that is so in much more detail in Appendix A of
my essay Global Warming Is Real (Even if the Term “Greenhouse” is Bogus). Even if the ozone
layer really were being adversely affected by man-made chemicals, the chemists did a terrible job of
presenting a scientifically-credible theory to us, and I say shame on the scientific community for not
calling them out on this. With friends like these, who needs enemies? It's no wonder that there are
so many climate deniers and anti-evolutionists among us.
19. Science and Religion Will Someday Become the Same Thing.
I have a great deal of respect for atheists who are intelligent, like Neil deGrasse Tyson and the late
Carl Sagan, although I don't share most of their theological views. One of my favorite Tyson quotes
is, “God is an ever-receding pocket of scientific ignorance that's getting smaller and smaller and
smaller as time moves on.” He was referring to the “God of the gaps” hypothesis, which says
things we attribute to God are merely those things we don't understand scientifically. When Sir
Isaac Newton studied orbital motions involving more than two bodies, he concluded that the solar
system is unstable. This led him to propose that God must occasionally nudge the planets back into
their proper orbits. Pierre-Simon Laplace later analyzed this problem using better mathematical
techniques and he found that the planetary orbits are, in fact, quasi-stable, based on nothing more
than Newton's laws. After Laplace published his results, the emperor Napoleon asked him why he
hadn't mentioned God in the analysis. Laplace supposedly replied, “Sire, I had no need of that
hypothesis.” Laplace wasn't stating he was an atheist; he just didn't need to invoke God in order to
explain orbital mechanics. Yet I cringe when scientists make public pronouncements that God
doesn't exist. All that does is inflame passions and turns religious people into enemies of science
(and of progress). Also, you really can't say things like that if you're intellectually honest because
you can't prove a negative. I accept the premise that God created and sustains the universe, but I'm
referring to Thomas Jefferson's God, not the cruel, vengeful and capricious cartoon character that is
worshiped in some churches. When scientists who deny God are asked what existed before the
universe came into being, they are forced to state that the universe (in one form or another) always
existed. Religious people respond with something very similar when asked what came before God:
they say that God always existed. I hope that scientific study will finally uncover the Single Causal
Factor that created and sustains the universe; we will find that it has always existed and it's the
necessary and sufficient condition for everything; it will also be so profoundly obvious that nothing
else is needed to explain it. Isn't that a pretty good description of God?
20. The Human Race Is Effectively Alone in the Universe.

I think this is the most important topic in this essay by far, because it influences our core philosophical
and religious outlooks, how we perceive our purpose and place in the universe, and the way we
relate to each other. Therefore, I'm going to devote several pages to this important topic instead of
summarizing it in a single paragraph as I have done in other parts of this essay. I tried to approach
this question logically in my essay Are We All Alone? Using empirical evidence, I came away with
the conclusion that we are effectively all alone. I broke the question down into three parts: 1) Are
there spiritual or supernatural beings watching over us and protecting us? 2) Are intelligent beings
from other planets actually making contact with us, or could they in the future? and 3) Is it possible
for us to make non-physical contact with other intelligent beings through radio communication
channels?

I can't find much empirical evidence one way or the other regarding the first question. Belief in
intelligent spiritual beings, benign or otherwise, is essentially an article of faith. These things
simply aren't amenable to study using the scientific method. The idea of spiritual or supernatural
beings who watch, guide, and sometimes intervene directly in human affairs seems to be very
appealing and accepting them as real is part of human nature and central to most of the world's
religions. In fact, it is belief in spiritual beings that separates religion from boilerplate philosophy.
Many scientists completely reject these ideas out of hand because of the lack of experimental proof,
but I think that may be a mistake. I think there may be some data to support the notion of a spiritual
aspect to our existence, although the data are rather weak. For example, some near death
experiences (NDEs) have been at least partially validated by medical workers who verified that
certain activities that NDE subjects could have only seen while “out of body” actually did occur.
Then there are people who have provided details about previous lives, which were corroborated by
others and could only have been known by those people if they had lived those previous lives.
Based on this rather weak evidence, there seem to be higher moral or spiritual laws – in addition to
the known physical laws – that operate in the universe. Beyond that, I can't be absolutely certain
whether or not there are conscious spiritual entities who commune with us.

Concerning the second question, I've concluded there is no possibility of making physical contact
with advanced alien beings. This conclusion is based on logic, the physical laws of nature, and the
vast distances between us and the home planets of potential advanced civilizations. Making a
journey across these distances within the limited lifespan of an individual traveler would require
transport at nearly the speed of light, which would require stupendous amounts of raw energy. Even
if the technology for achieving that were possible, the journey would essentially be a one-way trip;
after reaching earth, travelers would forever be temporally cut off from their home planet.

Of course, the problem of interstellar space travel has been solved in science fiction novels. There
we can find inter-dimensional beings who can enter the fourth dimension and materialize anywhere
they choose, and “wormholes in space-time” that enable star travelers to bypass the normal physical
limitations imposed by special relativity. Oddly enough, there are even a few supposedly well-
educated physicists who embrace such ideas. Well, I hate to burst anyone's bubble, but “bypassing”
the speed of light limitation, while being very convenient, would also entail violating the laws of
causality. Nature simply will not allow that to happen – ever. End of story; case closed.

Leaving inter-dimensional beings and wormholes behind, what could possibly motivate intelligent
beings to undertake what is essentially a one-way journey? Only three things come to my mind:
1) to gather information about another civilization solely for the sake of knowledge itself, 2) to
satisfy a hunger to help a dysfunctional civilization like ourselves avoid self destruction and to
guide them along the evolutionary path toward a higher purpose, and 3) to undertake a campaign of
conquest to invade a habitable planet like earth and subjugate or exterminate the native population.
The first motive makes no sense because there is very little a hyper-advanced civilization could
learn from people like us who have just recently emerged from the bronze age. Plus, there would be
no way for those travelers to transmit any knowledge they could glean from us back to their home
planet in anything resembling a timely fashion. But if aliens are only motivated by a desire to help
us, then they must truly be altruistic, because they could never return to a “home” that is anything
like the one they left. Only the third ominous possibility makes any logical sense; but if alien
visitations have occurred and are occurring with invasion, conquest, and destruction in mind, then
those dire events would have already happened. Furthermore, none of the UFO conspiracy theories
make any sense to me, because they would require an extraordinary level of secrecy and global
cooperation involving tens of thousands of people at every level of every government on earth.
Humankind has never accomplished anything close to that level of cooperation in the past and we
probably never will.

Concerning the third question, the evidence available to me strongly suggests that there are relatively
few places in the Milky Way where advanced civilizations could exist. Our sun happens to be (at
the present time) in a relatively empty space between two spiral arms. It is only in places like this
where physical conditions are fairly benign. The vast majority of places in our galaxy are simply
too hostile for life to survive. Most star systems in our galaxy consist of first-generation red
dwarfs with planets (if any) composed of hydrogen and helium. Those planets just don't
have the right chemistry for life. Many of the second- and third-generation stars are giants that burn
out too quickly; however, the long-lived medium-sized stars usually form multiple-star systems
where it's impossible to maintain stable planetary orbits in the so-called “habitable zones.”

The few remaining solitary sun-like stars that live in benign locations might have life-supporting
planets, and recently astronomers have indirectly detected some interesting candidates for those.
But then consider the earth, which enjoys the most favorable conditions possible for supporting life.
Even here, life has been nearly extinguished on a number of occasions over the 4-billion year
history of our planet. Mass extinctions usually leave some surviving primitive or microbial life
behind, but then it's a very long evolutionary road back to the more advanced life forms. How
many times over our sun's 8-billion year lifespan could evolution recur? Only very recently on
earth has a single species, Homo sapiens, emerged with enough intelligence and inventiveness to
begin to carry out interstellar communication. And based on mitochondrial DNA evidence, our
species was almost driven to extinction two times before we developed any advanced technology at
all. If humans did become extinct, would another species ascend to take our place? Maybe not.
After all, there doesn't seem to be any requirement for life to be technologically advanced. Other
animals, including our closest ape relatives, seem to be perfectly content living simple lives without
feeling any pressure to walk upright and develop large brains.

Based on all of this evidence, my guess – and I admit it's only a guess – is that there may be as few
as 10 communicating civilizations in the entire Milky Way. Most or maybe all of those few
civilizations would be scattered in those empty spaces between the spiral arms. The likelihood that
any of them are close enough to earth to communicate with us using anything resembling current
technology is vanishingly small. It's no wonder that the SETI Project is coming up blank.

So my answer to the question is this: Yes, effectively we are all alone. Strangely, having come to
this conclusion didn't make me discouraged or depressed. On the contrary, having the knowledge
that we're all alone helps me focus on my 7 billion brothers and sisters who share this little blue life
raft slowly drifting through the Milky Way. In my opinion, this is far better than wasting my time
hoping and praying that some supernatural or extraterrestrial beings will fly down and take care of
all our problems. One of the things that gives life some purpose is the realization that all we can
really depend on is each other.

21. The Origin of the Universe Is a Mystery That Science Alone Is Not Able to Solve.

One of the dictionary definitions of mystery is, “Any truth that is unknowable except by divine
revelation.” I think “unknowable” pretty well sums up the origin of the universe. Within our
particular universe, everything is linked through a chain of causation back through time. By
reversing time, we can trace every causal chain in our universe all the way back to a common
origin. At that point, the chains of causation simply end and that's where the mystery begins.
It's pretty obvious that the universe actually does have an origin. For example, we know the
universe is expanding. Assuming that the universe is finite (whether it's bounded or unbounded),
space is getting “bigger.” If we run the movie backwards, space gets “smaller” until space simply
runs out at some point; then we arrive at the origin, also known as the “big bang.” Physicists say
that the laws of physics break down at the big bang, which means the chain of causation as we
know it ceases to exist. Even if you don't know anything about astronomy or an expanding
universe, the second law of thermodynamics is all you need to prove there's an origin. As the
universe evolves, irreversible processes create entropy, which cannot be destroyed. This is the same
as saying that the universe accumulates information about itself while its history is being written.
Again, assuming the universe is finite, there can only be a finite amount of entropy. Consequently,
there was less total entropy yesterday than today, and there was less total entropy two days ago than
yesterday. Running the movie backwards, entropy monotonically decreases until we flat run out of
entropy and hit a zero-entropy void. Again, there is an origin with no prior history in sight.
(Physicists seem to be befuddled as to why the universe started out in a low-entropy state. To me, it
only makes sense that a universe with no history would contain very little information, and hence
very little entropy. I guess I'm too dumb to understand why anyone would think that's odd.)

Physicists hate the idea of a universe they can't explain, so they invent causes. Some of them are
pretty creative. Take Lee Smolin's evolutionary universe. According to his theory, our universe
emerged from a black hole in another universe, while other universes are born from black holes in
our universe. Natural selection chooses universes that tend to produce lots of black holes because
they produce more offspring than those that only produce a few. Since all such universes are
causally linked, this might seem to solve the problem of origins, but it doesn't. If mother universes
give birth to billions of daughters via black holes, the number of universes increases exponentially
with each successive generation. Working backwards through these chains of causation, the number
of universes diminishes exponentially until we arrive at The Mother of All Universes. The source
universe of all these chains doesn't have a mother, so we're not really any further ahead in solving
the problem of origins. An evolutionary universe model just kicks the can further down the road.

In fact, any theory that purports to furnish a sufficient cause for our universe will suffer the same
fate as Smolin's evolutionary universe model. The only way to avoid the problem is to accept that
our universe just doesn't have a cause in the normal sense. Stephen Hawking came close to
accepting it; he says the universe popped into existence as a random quantum fluctuation. But this
requires the laws of quantum physics to exist in the first place, so where did those quantum laws
originate? Even proposing a quantum fluctuation in the beginning really doesn't solve the mystery.

Let me propose the following principle: A logically self-consistent universe must exist for no other
reason than it can exist. In other words, logical self-consistency is both the necessary and sufficient
condition for existence. Along with the weak anthropic principle, this principle also explains why
the fundamental constants of nature in our particular universe seem to be so finely-tuned for
supporting intelligent life. Since every conceivable logically self-consistent universe must exist, a
universe like ours that is capable of supporting intelligent life must exist.

22. Causality Underlies All Other Physical Laws in the Universe.

The statement that causality underlies all physical laws might seem trivial, but a closer examination
reveals some surprises. Conjecture 21 proposes that our universe, or any universe for that matter,
rests entirely on logical self-consistency. Causality is a fundamental requirement for a logically
self-consistent universe. As I stated in my essay Order, Chaos, and the End of Reductionism,
Nature will do whatever is necessary to prevent any attempt to bypass or circumvent causality.
(That is why Einstein's theory of general relativity is still incomplete – its field equations have
solutions that permit time travel into the past, which violates causality. If general relativity were
really complete, it would never allow such solutions to exist.)
The second law of thermodynamics is intimately tied to time and causality. The modern version of
the second law states that the total entropy of a closed system can never decrease. According to
Boltzmann's formula, entropy is equal to a constant, k_B, times the logarithm of the number of
microstates that correspond to a macroscopic state; i.e., a state that can be specified by temperature,
pressure, density, etc. Claude Shannon, who founded information theory, realized that Boltzmann's
formula shows that entropy is directly proportional to the number of information bits that are
required to define (or bits contained within) a system. This led him to conclude that entropy and
information are one and the same. It's not just that entropy and information are similar, or that they
share some of the same qualities; no, they're identical to each other. Some scientists seem
unwilling to take Shannon's intellectual leap, so they hedge a bit and say that entropy is “hidden”
information. Whatever. If information is hidden or encoded somehow, that doesn't change the fact
that it's still information. Since the universe is a closed system, its total entropy cannot decrease. If
you accept (as I do) that entropy and information are the same thing, this leads directly to the
corollary that information cannot be destroyed. How is that linked to causality? The law is simple:
What was done cannot be undone. Since information/entropy is a record of the past, destroying
information would entail erasing the past, breaking the causal chain between the past and present.
In other words, causality requires strict enforcement of the second law of thermodynamics.
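That proportionality can be written as a one-line conversion. A sketch of my own, assuming the natural-log form of Boltzmann's formula (S = k_B ln W), so that S joules per kelvin corresponds to S/(k_B ln 2) bits:

    import math

    k_B = 1.380649e-23   # Boltzmann's constant, J/K

    def entropy_to_bits(S):
        # Convert thermodynamic entropy (J/K) to information content in bits.
        return S / (k_B * math.log(2))

    print(f"{entropy_to_bits(7.35):.2e} bits")
    # the gram of boiling water from Conjecture 8 carries roughly 7.7e23 bits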

This also solves the mystery of time. Why is time irreversible, always pointing in the same
direction, and why do humans “sense” the passage of time? Well, actually time is reversible at the
quantum level, where interactions between fundamental particles can go either in forward or
reverse. The arrow of time only exists at macroscopic scales where information is being generated
(and permanently recorded). The reason humans sense the passage of time is because our
individual consciousnesses are accumulating information as time moves forward. Our “sense” of
time is simply our ability to remember the past and our inability to remember the future.

Causality is manifested in other surprising ways. Recall the EPR paper and Bell's Theorem,
discussed at length in my essay Is Science Solving the Reality Riddle? EPR thought they had
discovered the big “gotcha!” that would nullify Niels Bohr's interpretation of quantum physics, which
held that quantum states are indeterminate until they are measured. The reasoning EPR used
in their paper was that Bohr's interpretation would allow, or actually require, faster-than-light
transmission of information between two entangled systems when one of them is measured, a clear
violation of causality. As it turned out, experiments based on Bell's Theorem proved that Bohr was
right. Ironically, EPR's reasoning actually proves that quantum states must remain indeterminate
until measured. Otherwise, quantum states would be predetermined through hidden variables, and a
communication device based on the EPR paper actually could transmit faster-than-light signals
between entangled systems. As it stands, Nature has arranged things so that an EPR device can
only transmit faster-than-light “signals” that are random and undecipherable, which contain no
information that could possibly alter history and violate causality. Can you appreciate the lengths
that Nature will go, even at the quantum level, to prevent us from violating Her law of causality?

23. Biological Beings (Including Humans) Are Not Designed Creatures.

It's apparent by reading Conjectures 12, 13, and 15, that I'm groping for answers concerning the
origin of life and a complete theory of evolution. I'm not a biologist by any stretch of the
imagination, but I've been doing a lot of thinking about these subjects lately, and I believe I may
finally be on the right track (refer to Appendix G that was recently added to my essay Order, Chaos,
and the End of Reductionism).

At some point in almost every public debate between someone who embraces materialism and
reductionism (usually a scientist) and someone who promotes theism and creationism (usually a
member of the clergy), a false dichotomy arises: “My opponent's views cannot explain X, Y, or Z;
therefore, you must accept my views as truth.” An average person who tries to evaluate such a
debate with an open mind is then forced to choose which person's views are least wrong. This is a
terrible choice in my opinion because both views are fundamentally flawed.

The theist/creationist often makes the mistake of invoking scripture. This automatically invites an
attack by the opposite party, who proceeds to point out numerous scriptural falsehoods and
discrepancies. On the other hand, the materialist/reductionist is often backed into a corner by not
being able to properly address the problem of an origin (see Conjecture 21, above) or trying to
defend the standard theory of evolution that is still incomplete.

One of the favorite arguments used by theists/creationists consists of the statement, “Since the
universe was created, there must be a Creator.” Beginning that sentence with the word since makes
this argument a logical fallacy known as “begging the question.” Making the assumption that the
universe was created is the same thing as assuming the conclusion that a Creator exists. There is no
objective, testable evidence that the universe was created. (I think a plausible alternative to a
created universe may be found in Conjecture 21.)

A somewhat “softer” version of creationism is intelligent design. In Conjecture 12, I mentioned
Michael Behe's analysis of certain biological features that he concludes can only have arisen by
design. Having thought long and hard about this, I now realize that his views are wrong. Instead of
pointing out the fallacies contained in Behe's analysis, I had unwittingly slipped into a reductionist
mind set that supports the conclusion that evolution is driven by forces that are essentially random,
thus creating a false dichotomy that supports Behe's ideas. It is now clear to me why the standard
version of evolution is wrong, but gaining that insight also obviates the need for intelligent design.

The critical insight came by looking at the function of DNA and considering how little information
is actually contained within the human genome. It is estimated that whereas over 10⁴² bits of
information would be required to replicate an entire human being, there are fewer than 10¹⁰ bits of
information contained in the 46 chromosomes in the nucleus of a human cell. This discrepancy of
32 orders of magnitude can be resolved by understanding that the genome isn't a human blueprint;
rather, it defines the process of human evolution from a single cell. Cellular differentiation
occurring in an embryo, which produces the various tissues and organs, requires very intricate feedback
loops, using chemical (and possibly electrical) communication among the cells through their cell
membranes, and triggering chemical <If …Then> logical operators embedded in the DNA itself.
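As a loose analogy only (a toy Lindenmayer system, not a model of DNA), a few lines of Python show how a small rule set, applied recursively, unfolds deterministically into a structure far larger than the rules themselves:

    RULES = {"A": "AB", "B": "A"}   # a two-rule "genome"

    def develop(axiom, generations):
        # Rewrite every symbol in parallel, once per generation.
        s = axiom
        for _ in range(generations):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    organism = develop("A", 30)
    print(f"rule set: {len(str(RULES))} characters -> organism: {len(organism)} cells")
    # 21 characters of rules unfold into ~2.2 million "cells", identically every run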

This could also explain the phenomenon of embryonic recapitulation, where the human embryo
goes through various stages of development that seem to resemble pre-human life forms. If the
genome represents an assembly process instead of a complete blueprint, then the only way to arrive
at the end product is to repeat all the evolutionary steps between one-celled organisms and human
beings. Thus, the DNA code does not transcribe the final product – it defines the process that
assembles it. Since there is no master design – not even in the DNA code itself – there is no logical
necessity to have a Designer.

24. The Multiverse Theory is the Ultimate Cop Out.

It's become fashionable in the 21st century to propose multiple universes as the answer to every
unsolved problem of physics. I recently watched a YouTube video of Amir Aczel interviewing
Brian Greene, who is one of the most renowned physicists/string theorists/cosmologists on this
planet. Of course, Aczel is no slouch either, having written numerous books on mathematics, his
primary field of expertise, as well as physics. Aczel challenged the idea of an infinite universe,
pointing out that the universe isn't expanding into pre-existing space, but that space and time arose
from the big bang and space is continuously expanding; thus, the universe must be finite. Greene,
on the other hand, subscribes to the multiverse theory in which our “universe” is only part of an
infinite space along with other “universes” that continually pop into existence. This redefines our
universe and reduces it from being “all there is” to merely “all we can observe.”

The multiverse theory reminds me of an episode from the Seinfeld television series. In that episode,
George Costanza has lost his job, his apartment, and his girlfriend; he has moved back in with his
parents; his wardrobe consists of a shirt and sweatpants. The scene is in Jerry Seinfeld's apartment.

Jerry: “Again with the sweatpants?”


George: “They're comfortable.”
Jerry: “Do you know the message you're sending out to the world with these sweatpants? You're
telling the world, 'I give up.'”

Multiple universes are like George's sweatpants; they're comfortable. By embracing a multiverse,
you don't have to explain why this universe appears so finely-tuned to support life. In fact, you
don't have to explain how or why this universe began in the first place. It's nothing more than a
huge repository for ignorance like dark matter and dark energy; it's a place where physicists send
unsolved problems to make them disappear. It's not even a theory in the scientific sense; it's just an
unproved, unprovable, and unfalsifiable idea. There's a name for a fact-free belief system based on
such ideas: religion. A multiverse is no more plausible than “God did it” for explaining how and
why our universe came into being. When Brian Greene and others articulate the multiverse
concept, the message they're sending out to the world is, “I give up.” It's the ultimate cop out.

I recently finished reading a book Hidden in Plain Sight by Andrew Thomas. I don't agree with
everything the author says in that book, but I do agree with his main point: The universe is
everything there is. Since the universe is all there is, there is no external measuring rod and no
external clock that Nature can use to scale distances and times within the universe. All that Nature
has to go on are relationships between objects inside the universe; in fact, there is only the inside
and no outside. Therefore, the universe is all relative; space and time do not exist as independent
objective entities, but only as a means for establishing relationships between objects. These are the
concepts that serve as the very foundation of Einstein's theory of relativity. We're approaching the
100th anniversary of the general theory of relativity as I write this in 2014. Yet it's as if some
physicists have learned nothing at all from his theory; they are still proposing infinite numbers of
“universes” co-existing within an infinite external domain of space and time. Well here's the
problem: You can have relativity or you can have a multiverse, but you can't have both.

It's as if we've flashed back to a time before 1887 when scientists were still arguing over the
physical properties of luminiferous ether. Today, scientists have proposed experiments to measure
the “curvature” of space-time and to detect gravity waves propagating “through” space. Well, here's
my prediction: When those space-time curvature and gravity-wave experiments are performed,
nothing will be found, just as the Michelson–Morley experiments failed to find any trace of
luminiferous ether. The reason why I'm confident about seeing negative results is that everything is
relative: Space-time is not a “thing” that can be stretched, bent, or twisted by gravity waves, and
there are no “straight” measuring rods you can line up with space-time to check its “curvature.”

By the way, the multiverse theory provides an ontological argument for the existence of God. In
fact, it provides an ontological basis for any god that can be imagined. The proponents of the
multiverse hoped they could use it to neatly sidestep the requirement of a Designer; ironically it
actually proves the opposite.

25. The Scientific Method Is Fatally Impaired by the Human Senses.

Anything that the scientific method can prove or disprove relies on the human senses. The list of
the human senses is usually limited to five: sight, hearing, smell, taste, and touch. I would add two
more senses to the list: space and time. Our sense of space is how we arrange objects in relation to
ourselves. It is closely tied to our sense of sight, and I will show they are both inherently limited.
Our sense of time involves perception of motion between “moments” and the accumulation of
information. Our mental concepts of space and time are inherently local and thus limited.

When we look at something, the only thing we actually “see” is a tiny area in the center of our field
of vision. If you stare at a distant object and extend your hands to the sides, you won't be able to
clearly see your fingers. To really “see” an object, the eyes must rapidly and randomly scan over it
and send tiny pieces of the object to the brain, which stitches the pieces together into a composite
mosaic image. The brain is able to do this through the spatial sense. It knows that a face consists of
a nose located above a mouth with eyes on either side of the nose and a chin below the mouth. We
think we “see” a complete face, but our eyes only send small bits and pieces of a mosaic to our
brains in random order. What we “see” is mostly inferred from our spatial sense.

Our most advanced scientific instruments are just extensions of the seven senses. The Hubble space
telescope is an expanded version of our sense of sight, which is intimately tied to a localized sense
of space and time. If we point the Hubble at a quasar and measure its distance as 13 billion light
years, and point the Hubble at another quasar in the opposite direction and measure that distance as
13 billion light years, our brain assembles this information into a 3D diorama with two quasars 26
billion light years apart. But this picture is completely false. The light recorded by the Hubble
emerged from those two quasars when the universe was 13 billion years younger than today's
universe; therefore, those two images can only be separated, at most, by a couple of billion light
years. Our brains use Hubble telescope images to construct a false 3D Cartesian diorama existing
in the “now.” Large-scale reality is fundamentally distorted by our localized senses of space and
time. We have no other way to interpret the Hubble data because the brain is completely
constrained by its senses, which work together to project an inappropriate model of the cosmos.

The scientific method, which has served us well since Isaac Newton's time, is reaching the end of its
useful life. Probing farther and farther into the cosmos provides us with nothing but a highly
distorted picture of reality due to the inherent limitations of our senses, which were optimized for
acquiring information about local objects and events. Probing further and further into the
microscopic realm only yields quantum weirdness, which nobody can fully understand through a
localized, macroscopic sense of space and time. Data from the LHC (Large Hadron Collider) can
only enlighten us to the extent that we can force the data to conform with a model of reality that our
brains construct, which is a model intimately tied to – and limited by – our senses.

Even if string theory eventually emerges as the Theory of Everything (which I seriously doubt), the
scientific method will be utterly useless as a means of confirming it. As it stands, string theory is
based upon the existence of 1-dimensional objects on the scale of a Planck length vibrating in 10
dimensions, but six of those dimensions are completely inaccessible to us. Probing down to
Planck scales would require stupendous amounts of energy that are simply not available to us using
anything resembling our current level of engineering. The time is coming when scientists must
finally concede that the scientific method has run its course because the human race has reached the
limit of what can be understood through our senses.

26. The Entropic Universe Has a Purpose.

I've essentially come around full circle to the first topic in this Manifesto; i.e., that quantum physics
and general relativity cannot be reconciled. I think I now know why – or at least I see a glimmer of
the answer. In my essay Teachings from Near Death Experiences, I explored the ongoing work of
Stuart Hameroff and Roger Penrose on quantum consciousness, which has spanned the past 20
years. There is fairly conclusive evidence that consciousness does not emerge at the level of the
synapses between neurons, nor does it consist entirely of brain-wave patterns. Consciousness is
actually manifested at a much smaller scale: within protein structures known as microtubules in the
cytoplasm of all living cells. Hameroff insists that even a single-celled organism, such as a
paramecium, exhibits a definite rudimentary consciousness without the benefit of a single neuron or
synapse.

According to the Hameroff-Penrose model, consciousness involves quantum entanglements that
have fractal properties that descend down to the unimaginably small Planck scales. Macroscopic
effects, which they call "moments of consciousness," occur when those entanglements decohere, on
time scales on the order of 1/40th of a second. The Libet experiments, which show that conscious
decisions precede the awareness of making those decisions, seem to substantiate this model. I
believe it is no accident that structures in living cells exist at the precise scales that bridge between
the atomic/subatomic quantum universe and the macroscopic universe. In order for life to fulfill its
main purpose, those two universes must remain separate. This is why quantum physics and general
relativity cannot and will not be fully reconciled, although they do not contradict each other.

But what is that purpose? What we perceive as linear time does not exist at the level of quantum
entanglement, where protoconsciousness occurs. Time emerges from entropy, which is strictly a
feature of the classical universe. The work by Erik Verlinde shows that the classical laws of motion
and gravity, along with how we perceive space and time, emerge from entropy. Examining
consciousness in the dream state can provide some clues to how consciousness operates on the
non-material plane, and to the purpose of the classical, material universe as it relates to
consciousness. Dreams often lack the usual sequence of causality. Dreamers sometimes know what
will happen next in a series of events, but they are powerless to change the outcome. In contrast,
waking-state “moments of consciousness” in the Hameroff-Penrose model become “grounded” in a
causal frame of reference – the entropic universe. According to the Hameroff-Penrose model, states
of consciousness certainly can and probably do occur without microtubules, cells, life, or a universe
for that matter. It's just that such a consciousness would lack qualia and purpose. That, in a nutshell,
is the purpose of life and why there is a physical universe as we know it: to enable conscious
thoughts to be carried out as actions that have irreversible effects, giving consciousness purpose
and meaning.

In Conjecture 24, above, I stated that the multiverse theory is a cop out. I believe this is true more
than ever. In the realm of quantum entanglement, anything and everything can exist
simultaneously. But in order for consciousness and free will to have any purpose, they must also
operate within the limits of a classical world of causes, effects, and consequences. A universe that
splits off into ever-bifurcating universes has no sense of morality or purpose in my opinion. There
is only one universe and there are no "do overs" here. A bad choice made today results in a set of
bad alternatives in the future. You may be able to get back on the original path with a lot of effort, but
you can never erase what you did in a world of entropy. I raised the question of evil in my essay
Are We All Alone? Here's my latest opinion on this matter. Our senses of right and wrong, good and
evil, are examples of qualia that result when a consciousness is grounded in an entropic universe
through the mechanism of life. Evil doesn't really have a purpose, per se. But I've come to the
conclusion that evil is unavoidable when intelligence and free will operate in a universe with
entropy. It's the price we humans must pay to be human.

27. The "Block Universe" Model Is Incompatible with Free Will.

In the book Hidden in Plain Sight, by Andrew Thomas, the starting premise is that the universe is
all there is; there are no external yardsticks that provide absolute measurements of space or clocks
that provide absolute measurements of time. Chapter 3 is entitled “Space Is Not a Box” and
Chapter 4 is entitled “Time Is Not a Clock,” and up till that point I kept thinking, “Right on! Here's
a science author who really gets it!” Then I came to Chapter 5, entitled “The Block Universe” and
became completely befuddled. In that chapter, Thomas completely contradicted everything he was
telling us in the first four chapters.
In Chapter 5, a block universe is depicted schematically as a set of spatial coordinates (reduced
from three to two), with a time coordinate pointing perpendicular to the spatial coordinates. This is
supposed to represent "space-time" as a 3-dimensional block. Only two spatial coordinates are
used because a real "space-time" universe with three spatial coordinates would be a four-dimensional
block that would be difficult to depict in a book. Individual "events" in time are
scattered as dots within the block. Events taking place over time would trace lines inside the block.

There are a couple of problems with this model. First of all, the geometry of that block universe is
Euclidean, whereas the geometry of space-time, as defined in Special Relativity, is hyperbolic.
According to SR, if an object changes position, Δx, Δy, and Δz, over a time interval Δt as observed
from any reference frame, then the distance ΔS traveled through space-time is invariant – the same –
for all observers, where ΔS² = c²Δt² – Δx² – Δy² – Δz². The minus signs in the expression for ΔS²
make the geometry of Minkowski space-time hyperbolic, not Euclidean. In order for objects to
trace out lines in a Euclidean block universe, like the one depicted in Chapter 5 of Hidden in Plain
Sight, all those minus signs would have to be changed to plus signs.
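
It's easy to check this invariance numerically. Here's a minimal Python sketch (my own toy
illustration, using units where c = 1 and an arbitrary displacement): a Lorentz boost changes Δt and
Δx, yet ΔS² comes out the same in both frames, while the all-plus-signs Euclidean sum does not.

    import math

    def interval_sq(dt, dx, dy, dz):
        # Minkowski interval (c = 1): dS^2 = dt^2 - dx^2 - dy^2 - dz^2
        return dt**2 - dx**2 - dy**2 - dz**2

    def euclidean_sq(dt, dx, dy, dz):
        # What a Euclidean "block" would use instead: all plus signs
        return dt**2 + dx**2 + dy**2 + dz**2

    def boost_x(dt, dx, v):
        # Lorentz boost along x with velocity v (as a fraction of c)
        gamma = 1.0 / math.sqrt(1.0 - v**2)
        return gamma * (dt - v * dx), gamma * (dx - v * dt)

    dt, dx, dy, dz = 5.0, 3.0, 1.0, 2.0
    bt, bx = boost_x(dt, dx, 0.6)

    print(interval_sq(dt, dx, dy, dz), interval_sq(bt, bx, dy, dz))    # 11.0 and 11.0
    print(euclidean_sq(dt, dx, dy, dz), euclidean_sq(bt, bx, dy, dz))  # 39.0 vs 21.0

The Minkowski interval survives the boost untouched; the Euclidean sum does not, which is exactly
why a Euclidean block is the wrong picture of space-time.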

The second problem with the block universe model is that time is treated as just another dimension,
making the universe “eternal.” In other words, a Being having a god's eye view of the universe
could – at least in principle – navigate through time like it was a spatial dimension and observe the
future. This is very much like Laplace's determinism, where a Being with complete knowledge of
the present state of the universe could use the laws of motion to extrapolate the present state into
states in the distant past and future. But if such a thing were possible – even in principle – then free
will cannot exist. Near the end of Chapter 5, Thomas himself acknowledges the problem of free
will: “The idea that all of space and time is laid-out in one unchanging block might appear
unsavoury [sic] to some people as it appears to deny the possibility of free will.” Exactly.
However, a deterministic block universe doesn't just appear to deny the possibility of free will;
logic demands it. (In the 16th century, John Calvin rightly concluded that the possibility of an
omniscient Being eliminates the possibility of free will. Forced to choose between the Christian
God and free will, Calvin chose God. This is the basis of Calvinism's doctrine of predestination,
where human beings are essentially lifeless robots programmed to either accept or reject salvation.)

A block universe that allows the possibility of an omniscient Being "might appear unsavory to some
people" for a very good reason: life is pointless when free choice is absent and everything
has been programmed into the universe. The mere illusion of making choices isn't the same as
making them, and therefore moral accountability is absent. How can we justify meting out rewards
and punishments without accountability? Forced to choose between an omniscient Being and
freedom, I'll choose freedom. So if God exists, I hope She's just as clueless about the future as I
am.

In Conjecture 26, above, I stated that irreversibility generates entropy (information) and is necessary
for a purposeful existence. Irreversibility also brings about chaos and unpredictability, which
seems to be necessary for free will. A linear, deterministic block universe without any purpose or
free will is just a machine – a dead universe.

28. Black Holes Only Exist in the Imagination.

Lately, the physics community is all abuzz over yet another apparent contradiction between general
relativity and quantum physics. Nature usually does a pretty good job of isolating quantum
mechanics from classical physics and/or patching over their differences. We don't need to calculate
the quantum wave functions of the Sun, Moon, and Earth in order to predict solar eclipses; nor do
we need to think about gravity when computing the scattering matrices of electrons. However, a
black hole (a hypothetical object) is where quantum mechanics collides directly with classical
physics, and this recently created a pretty messy situation known as the AMPS Firewall Paradox.
A while back, Stephen Hawking studied black holes from a thermodynamic perspective and
concluded that black holes have entropy, so an event horizon has a positive temperature and emits
black-body radiation. But he went further by stating that information is “lost” when an object falls
into a black hole. This caused quite a stir among physicists, notably the indomitable Leonard
Susskind, who insists that the conservation of information is an immutable law. After much hand
waving, it was decided that the information isn't really lost; it's encoded onto a black hole's event
horizon based on the holographic principle.

Then Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully (AMPS) uncovered an
apparent paradox involving Alice falling into a black hole, Bob staying outside the black hole, and
their quantum wave functions entangling (or not). This sent the physics community spiraling into
the so-called firewall crisis, which I admit I can't fully comprehend. I've always found the
description of a classic black hole to be very unsettling (see my essay Is Science Solving the Reality
Riddle – Trouble on the Horizon). According to popular science, the black hole's gravity literally
tears space apart at the event horizon, making escape impossible. Despite much hand waving by
some physicists, the black hole equations clearly show there really is a singularity at the event
horizon, where denominators are zero; distances and accelerations there are infinite. But why
would a singularity lurk in empty space? It sure looks like “spooky action at a distance” to me.

It turns out that a brilliant Indian physicist by the name of Abhas Mitra has solved the puzzle. Re-
examining the equations for black holes, Mitra discovered that you can indeed create a singularity
with an event horizon surrounding it. The trouble is, you can only do this if the black hole's mass is
zero! In other words, even though you can plug any large mass you want into an equation on the
blackboard, you can't have a black hole with mass in reality. I believe I've found why the discrepancy
exists. When calculating the size of the Schwarzschild radius of a spherical object, you must
include all mass-energy everywhere, both inside and outside the Schwarzschild radius, including the
“empty” space surrounding the spherical object itself. Since the gravitational energy surrounding a
spherical object is negative, as it is squeezed down to the size of its Schwarzschild radius, the
enormously negative gravitational energy surrounding it cancels all the object’s original positive
mass-energy; thus, as a “black hole” forms, its effective mass-energy (and its Schwarzschild radius)
approaches zero.
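
For a sense of scale, the textbook formula for the Schwarzschild radius is Rs = 2GM/c². Here's a
quick Python sketch (my own illustration; the constants and the solar mass are the only inputs, and
the closing comment simply restates Mitra's argument as I described it above):

    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8       # speed of light, m/s
    M_SUN = 1.989e30  # solar mass, kg

    def schwarzschild_radius(m):
        # Textbook formula R_s = 2GM/c^2, where M is the total effective mass-energy
        return 2 * G * m / c**2

    print(schwarzschild_radius(M_SUN))  # about 2950 m for one solar mass
    # On Mitra's argument, counting the negative gravitational energy of the
    # surrounding space drives the effective M - and hence R_s - toward zero.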

So with a proper understanding of general relativity, we see that there's no such thing as an event
horizon surrounding a singularity (naked or otherwise), no Hawking radiation, and no information-
loss paradox. Problem solved. See how Nature always elegantly intervenes to invalidate our flights
of fancy that might violate Her fundamental principles? There is no need for quantum wormhole
entanglements, à la Susskind, or nasty AMPS firewalls because there is no way of making a black
hole in the first place.

Predictably, the physics community is not enamored with Mitra's ideas. After all, scientists have
spent almost 100 years contemplating black holes and basing really cool and radical theories on
them. But recent astronomical data on so-called black hole candidates (BHCs) show they have
stupendously large magnetic fields, which classic black holes simply cannot have. So it looks like
Mitra is right. Even the legendary Stephen Hawking himself recently cast doubts on the existence
of black holes, and has started calling them gray holes instead. Unfortunately, the absence of black
holes is bad news for many current cosmological theories, especially ones like Lee Smolin's
evolving universes, where mommy universes give birth to black holes that collapse into baby
universes; a kind of genetic selection process that results in universes with lots of black holes and
high probabilities of being friendly to intelligent life based on the anthropic principle [sic].

When elaborate thought experiments result in crazy paradoxes, these problems are often caused by
the misapplication of physical laws by humans instead of the laws themselves.
29. A Simple Feedback Mechanism Resolves the Goldilocks Enigma.

There are scores of so-called free parameters that constrain the physical universe, and it turns out
that changing any of these by a couple of percent would radically alter physics and chemistry to
the point where biology, at least as we know it, would be impossible. It seems that these free
parameters are very finely tuned to make it possible for us to exist, which seems kind of arbitrary
and odd. This is referred to as the Goldilocks enigma, and it deeply troubles scientists. It might be
that these free parameters aren't really free after all, and that changing one of them entails changing
all of them in such a way that physics remains life-friendly. However, this doesn't really solve the
enigma of why physics should be life-friendly in the first place. Or maybe there are a plethora of
ways that “life” could exist without biology as we know it, but that's hard to imagine.

Of course, the Goldilocks enigma gives creationists and intelligent designers all the validation they
need for spinning their magical, supernatural myths. Physicists and cosmologists can't do much
better; they took the easy way out by combining the multiverse concept with the anthropic principle,
et voilà, the Goldilocks enigma is explained. The horrible multiverse idea seems to have originated
with Hugh Everett's many worlds interpretation of quantum physics, inspired by Schrödinger's cat.
Unbelievably, a poll taken in 1995 among 72 “leading cosmologists and other quantum field
theorists” revealed that almost 60% of them actually believed in some form of multiverse. In
essence, this means 60% of leading scientists gave up on science and embraced an unfalsifiable
theory that relies on a circular tautology, i.e., we exist because we're here, and a false dichotomy,
i.e., free parameters either must have been selected through intelligent design (a proposal rejected
out of hand by science), or else they must have been a complete accident.

I found a more elegant way around the enigma based on a simple feedback mechanism. All you
need is one axiom and all the rest flows from logic. The axiom originates with quantum physics,
and while everything in our universe may not exhibit obvious quantum behavior, quantum physics
still rules the roost. Within the realm where quantum rules do apply, a fundamental law is that
nothing truly exists until or unless it is observed. For example, electrons are quantum particles, and
when they boil away from a hot metal cathode in a vacuum, they become free electrons and
accelerate toward a positively-charged phosphor screen. As free electrons, they exist only as wave
functions. A raw wave function, Ψ, has a complex mathematical form so Ψ is not observable, hence
electrons – as free particles – don't exist. For an electron to be observable, its wave function must
“collapse.” By that, we mean Ψ is multiplied by it its complex conjugate, Ψ*, to produce a function
comprised of real numbers that yields locational probabilities. Multiplying Ψ by Ψ* involves some
non-linear device, like a phosphor screen that collects and amplifies an electron's energy and
localizes it as an observable, visible dot. The rigid Copenhagen interpretation of quantum physics
would assert that the visible dot itself doesn't exist until some conscious observer actually comes
along and sees it. (Of course, this relates to the Schrödinger's cat dilemma, and I'm not getting into
that topic here – if you've read my other essays, you know how I feel about Schrödinger's cat.)
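
To make the Ψ·Ψ* step concrete, here's a toy Python sketch of my own (a made-up one-dimensional
wave packet on a grid, standing in for the phosphor screen): the complex amplitudes themselves are
unobservable, but multiplying by the complex conjugate yields a real probability distribution from
which a single visible dot can be sampled.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy wave function: complex amplitudes on a 1-D grid of screen positions
    x = np.linspace(-5.0, 5.0, 201)
    psi = np.exp(-x**2 / 2) * np.exp(1j * 2.0 * x)  # Gaussian packet with a phase
    psi /= np.sqrt(np.sum(np.abs(psi)**2))          # normalize

    # Psi is complex and not observable; Psi times its conjugate is a real,
    # non-negative distribution of where a dot can appear on the screen
    prob = (psi * np.conj(psi)).real

    dot = rng.choice(x, p=prob)  # one nonlinear "collapse": a localized, visible dot
    print(dot)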

I have a much more relaxed interpretation of quantum physics than Niels Bohr and his colleagues,
i.e., quantum wave functions become real by rendering them observable, meaning that an object
could have been observed even if it wasn't. Based on this criterion, the dot on the screen is real
whether someone sees it or not. All classical, macroscopic objects are real because they are the
result of myriad quantum wave functions that have been rendered observable by countless nonlinear
interactions; thus, classical, macroscopic objects automatically satisfy the observability requirement
in an easy, natural way. The Moon remains in the sky even when you look away from it.

The proof of the feedback mechanism follows directly from ordinary logic:
1. For an object to exist, it must be observable (an axiom of quantum mechanics that objects do
not exist unless they are observed has been experimentally verified using Bell's theorem).
2. For the universe to exist, objects within it must exist, and those objects must be observable.
3. For objects to be observable, subject-object relationships must be possible.
4. For subject-object relationships to exist, conscious subjects must be possible.
5. For conscious subjects to make observations of physical objects, a physical means of
sensing those objects must be possible; e.g., sensory nervous systems or their equivalents.
6. Therefore, for a physical universe to exist, physical objects and conscious beings that
observe them with sensory nervous systems or their equivalents must be possible.

Notice I didn't say the universe had to have conscious beings with nervous systems, only that it had
to be possible. Die-hard quantum physicists would have a serious problem with my “relaxed”
approach, and insist that actual biological observers are a requirement. It doesn't matter to me,
because the feedback process still works even if the Moon disappears when you stop looking at it.
In fact, John Wheeler originally came up with the feedback idea, including it in the Big U model.
Wheeler was about as hard-core as they come in insisting that actual observations are needed in
order to make things exist. In Wheeler's Big U universe, the appearance of conscious observers
literally creates the physical universe retroactively, bringing all the pre-biological stages, starting
with the big bang, into existence. Of course, I consider this to be a feedback loop run amok, with
biological beings acting as their own causal agents. This version of reality seems an awful lot
like Genesis 1:1-31, where the physical universe suddenly popped fully-formed into existence. The
difference is that with the Big U, it was creatures like ourselves who popped it into existence.

All of existence can be summarized by this equation: Existence ↔ Observability + Causality

The double arrow indicates the feedback loop. Note that observability is a necessary condition for
existence, but it is not a sufficient condition. Therefore, causality was included in the equation to
bring order and sanity to the universe. Requiring causal agents assures that objects don't just spring
into existence based solely on being observable.

Engineers commonly use feedback to provide fine control and optimization, and the engineer in me
finds the feedback mechanism in Wheeler's Big U model very satisfying. The only remaining
question in my mind is whether the universe is truly optimized for life in a Goldilocks sense, or if it
is just barely able to support life. Well, we don't see life forms from other planets landing on the
White House lawn for breakfast with the president every morning. In fact, we don't see any
evidence of biological activity on other planets in our own solar system, although the jury is still out
on that question. But if you consider the dozens of free parameters that had to be set in order for
life to be possible, you'd have to say that this feedback mechanism was a pretty effective way of
tuning the system. So I'll go out on a limb and say that the universe we live in – the only universe
that really counts – is probably as good as it gets in terms of being life-friendly.

There is a sharp contrast between the feedback model and the prevailing multiverse models that are
embraced by 60% of leading scientists. Parallel universes, and various multiverse models,
including Lee Smolin's cosmological natural selection, all face the same obstacle. To reiterate,
physical laws with a large set of free parameters having special values are required for conscious
biological beings to appear naturally. Obtaining the “right” physics through chance alone would
require multiple trials, along with some unexplained mechanism that forces each trial to have a
variable (random) set of free parameters initially and then fixes those parameters as constants from
then on. If the initial values of those parameters are unconstrained, an unimaginably large number
of trials would have to occur to produce even a single “success.” Consequently, the vast majority of
trials would be wasted on dead, failed universes. It's clear that the principle of Occam's razor would
favor a simple mechanism that virtually assures success on the very first try.

30. The Libet Experiment Does Not Prove Humans Are Cyborgs without Free Will.
In Conjecture 16 of this essay, I said that consciousness emerges from physical complexity (in the
form of nervous systems) and its evolution coincides with the evolution of the brain. Although this
may be true, Conjecture 29 got me thinking about the chicken-and-egg question of whether
consciousness really did arise from the physical universe or vice versa. I think the answer hinges on
the true definition of consciousness, referred to as the “hard question” in psychology.
My essay Teachings from Near Death Experiences explores whether consciousness is a physical
process or something non-physical underlies it. According to reductionism, it is unequivocally
physical because conventional science can't do non-physical experiments or come up with non-
physical theories. Thus scientists must conclude that consciousness is equivalent to thoughts and
emotions, and those can be reduced to automatic electrochemical responses to the brain's
environment, called brain waves, which can be observed using the EEG (electroencephalogram).
Thus, the brain can no more make decisions than a lung or a kidney can, so consciousness and free
will are illusions and humans are reduced to machine-like cyborgs.
It is claimed the Libet experiment proves the above assertion. In that experiment, a timer is placed
in front of a human subject. The subject is asked to push a button and take note of the exact time
shown on the timer when the decision was made to push it. The subject's brain wave activity is
continuously recorded by an EEG and correlated with the timer. In every instance, the EEG showed
the subject “decided” to push the button about ½ second prior to the subject becoming aware of
making the decision to push it. Scientists claim this ½ second time lag is “proof” that all thoughts –
including ones about making decisions – are actually nothing more than automatic electrochemical
brain processes that we have no control over.
In my view, the experiment proves no such thing. The fact that the subject thinks he made a
decision to push the button ½ second after brain wave patterns occurred does not prove that the
brain wave patterns made that decision. All it shows is that there is an inherent time lag in our
conscious awareness. But we already knew that; touching a hot stove induces motor reflexes before
we become consciously aware of feeling any pain.
The explanation of this experiment is that the subject (the consciousness observing thoughts and
making decisions) cannot observe itself, so the only way to be consciously aware that a decision is
being made is by observing brain wave patterns external to the consciousness itself. The brain
waves that the subject observes and the EEG records may simply be the brain preparing to send a
signal to the motor neurons to push the button. The consciousness becomes aware of those wave
patterns ½ second later and misinterprets this as “making a decision” to push the button.
We can easily demonstrate that the conscious mind can misinterpret brain wave patterns. Everyone
has had a creepy déjà vu experience, when information being directed into the memory is
misinterpreted as information coming from the memory instead. This causes the feeling of
experiencing something in the present that has also occurred in the past. We know this is just an
illusion (contrary to certain New Age explanations) because we can never seem to pin down exactly
when these experiences occurred in the past; déjà vu memories just don't fit within the
chronology of other memories. I'm convinced that a similar type of misunderstanding on the part of
the subject of the Libet experiment leads to a false conclusion that decisions are just automatic
physical responses that can be measured by an EEG. Of course, that interpretation relies on the
assumption that all thoughts must have a physical basis, so this is a classic case of circular logic. It
shows the need to weed out all hidden assumptions that are buried in our theories and experiments.
If consciousness isn't just some physical phenomenon in the brain, what is it? Science is usually
very reluctant to acknowledge non-physical things exist. Yet certain physical experiments only
make sense using quantum mechanics, which doesn't appear to be very physical in the usual sense.
My guess is that we might need some yet unknown non-physical process to explain consciousness.
31. Psi Phenomena Might Be Experimentally Confirmed as Quantum Entanglement.
Scientists sometimes lapse into a bad habit of thinking about physics as being frame-dependent, i.e.,
a preexisting space and time (or more correctly space-time) in which objects exist and events occur.
This, of course, is completely contrary to the principle of relativity where there is no universal
frame of reference; instead, every observer defines their own unique reference frame. Space and
time are needed only for establishing subject-object relationships. Stated bluntly, space and time
simply don't exist in the absence of subject-object relationships. Although the very idea of
“transcending space and time” gives material reductionists severe heartburn, that very thing does
occur with quantum entanglement and it has been repeatedly proven experimentally to be true.
My essay Is Science Solving the Reality Riddle? discussed experiments based on Bell's Theorem
and the Quantum Eraser in some detail. These experiments demonstrate the quantum states of
entangled particles as having exceptionally strong correlation, and that correlation actually does
transcend space and time. These experiments have to be carried out with exquisite care and
precision, however. Quantum entanglement is a very delicate state and it can be disrupted by the
slightest interaction with the environment, so it is difficult to maintain entanglement over any
distances and time. Furthermore, the ability to detect statistically significant correlations between
quantum states requires extremely delicate instruments and precise measurements. Nevertheless,
the experiments clearly reveal that in the world of entangled particle pairs, space and time really
don't exist – there is simply no subject-object relationship between them that can cause any space
and time separation. We see those particles separated from each other by space and time, but they
do not. There is one caveat concerning these experiments, however: They must not be carried out
in any way that could violate causality. In other words, there can be no possibility of sending
messages through space instantaneously or back to a previous time. If the experiment is set up in
any way that might violate causality, it is certain to produce negative results.
Turning to psi phenomena, it seems that most believers in psi phenomena aren't true scientists, and
most scientists aren't true believers. Consequently, much of what passes as “psi research” is either
haphazard and sloppily-executed by amateurs, or its success is undermined by science professionals
who have a strong negative bias against the possibility of achieving success. Refer to the remote
staring experiments independently carried out by Marilyn Schlitz and Richard Wiseman, discussed
in my essay Teachings from Near Death Experiences.
I'm convinced that if psi phenomena are real, they surely involve the physical brain in some
unexplained manner. However, the physical connection may be at a much deeper level than
electrochemical processes occurring in the synapses. If Roger Penrose's and Stuart Hameroff's
theory is correct, consciousness (or more likely subconsciousness) involves quantum-level
processes in sub-microscopic structures called microtubules within brain cells. In that case,
anomalous psi phenomena, which are impossible according to material reductionism, might be
experimentally confirmed as brain-to-brain quantum entanglement involving microtubules.
Just as Bell's Theorem and Quantum Erasure experiments must be carried out meticulously and
without any possibility of violating causality, psi experiments would have to be carried out with the
same level of scientific rigor. Psi experiments should not try to beam fully-formed thoughts from
subject to subject, or communicate messages instantaneously through space or backward in time –
any attempt to defeat causality is doomed to failure. Instead, experiments should look for
quantifiable, subconscious (non-subjective) responses in a subject that correlate with signals that are
randomly-initiated from remote subjects. I believe the Schlitz remote staring experiments could
serve as the model for these studies, although I doubt that achieving consistent, repeatable, positive
results will be easy. See Appendix D of Teachings from Near Death Experiences for some
innovative ideas about doing this.
32. Matter May Emerge from Consciousness Instead of the Other Way Around.
In Conjectures 16 and 17, I posited that consciousness emerges from material complexity,
whereupon it undergoes a separate existence and evolution. Based on information I've uncovered
lately, I may be forced to rethink my position by considering whether the inverse of that hypothesis
is true, namely that the material universe emerges from consciousness.
Everyone knows that the ordinary matter planets and stars are made of consists of atoms, which are
made from elementary particles – quarks and electrons. At the level of elementary particles,
quantum physics reigns. In fact, you might say that elementary particles are purely mathematical
quantum wave functions, and these are no more physical than cosines or logarithms.
Although physicists have a pretty good handle on the rules that govern quantum wave functions,
they can't tell us what quantum waves are made of. Nobody has ever measured, let alone seen, a
quantum wave directly. Their existence can only be inferred by using them to compute probabilities
that closely match indirect physical observations. String theory is no help at all. This is just an
attempt to turn non-physical quantum waves back into physical objects again, made of tiny
vibrating strings. But that just raises the question of what kind of matter makes up the strings
themselves. I find it kind of amusing that while modern physics furnishes laws that describe
physical phenomena in detail, physicists don't seem to have any idea what the most basic physical
elements consist of.
The same is true of psychologists and neuroscientists. These experts have sliced and diced human
consciousness into the id, ego, and superego, recorded and mapped brain-wave patterns, and
divided the brain into functional regions like memory, emotion, and intellect. Yet nobody knows
what consciousness really is. Of course, material reductionists will confidently supply a ready
explanation: Consciousness is nothing more than electrochemical wave patterns in the brain,
stimulated by sensory inputs from the environment over which we have no control. Nice try, but
that still doesn't explain anything – it just redefines the problem.
Then there is the artificial intelligence (AI) crowd, who insist that human consciousness is nothing
more than a set of behaviors and habits that will eventually be imitated or duplicated by machines.
In fact, a complete human personality could be uploaded from someone's brain into a machine and
kept alive there indefinitely, rendering that person immortal. (This was the basic premise behind
the movie “Transcendence” starring Johnny Depp.) AI experts believe that with sufficiently
complicated electronic circuitry and sophisticated software, a machine will pass the Turing Test and
render human brains obsolete within the next 10 to 20 years.
So it seems that quantum physicists, psychologists, neuroscientists, and AI engineers all have
something in common: They all are experts in highly-specialized fields involving some
fundamental element they can't fully explain. I'm thinking that each of these fundamental elements
is the same thing; i.e., consciousness, intelligence, and quantum waves are the same purely
mathematical “mind stuff.” And since all observable phenomena emerge from quantum wave
functions, all matter, energy, and forces may be made of mind stuff as well. Of course, this is what
Eastern mysticism has been saying all along, and it also lines up with John Wheeler's “it from bit”
model of reality.
I have just one difficulty in accepting the idea that the entire physical universe consists of mental
images: This idea tends to descend into solipsism, which I find disturbing. It's one thing to believe
that non-living objects like tables and chairs are nothing but figments of my own mind, but how do
other conscious beings fit into such a purely mental construction of reality? Or do they? If there
are myriad conscious beings, how is it that others are imagining more or less the same version of
reality that I am? Could it be that I am the only one who is imagining reality into existence?
Of course, scientists reject out of hand any notion that the physical universe is mentally constructed
because there seems to be no way that science can test such a hypothesis. But maybe it's just
because nobody's really tried. In light of Bell's experiment and the delayed choice experiment, it
certainly seems that quantum physics is hinting at this possibility.
33. The Universe Is Not an Object and Is Therefore Inherently Unobservable.
The scientific method is based upon observation. Every observation involves a subject (the
observer) and an object (the observed). Cosmology is not science because its practitioners are
trying to "observe" something (the universe) that is not an object. An object can only be
observed from its exterior, and as stated numerous times in my essays, the universe doesn't have an
exterior. Someone might object by saying that I can observe my living room from its interior, so
why can't I observe the universe from its interior? The answer is that when you observe your living
room's interior, you are actually observing the floor, ceiling and walls as objects with exteriors, and
forming a mental picture of your living room as a box having an exterior delineated by those
boundaries. Unlike your living room, the universe doesn't have boundaries or an exterior.
More fundamentally, every observer is part of the universe and cannot be separated from it. No, I'm
not having a kumbaya moment or a New Age revelation. I simply mean that being literally part of
the universe makes it impossible for anyone to make a normal subject-object observation of it.
Since every observer has a unique perspective and all observers are simultaneously at the center of
the universe, there would have to be as many universes as there are observers in order for subject-
object relationships to exist. The consequence of this is that anything that scientists can ascertain
about the universe as a whole through the process of observation is going to be fundamentally
distorted and flawed.
Here's an example of what I mean:
Astronomers probe the universe using electromagnetic radiation over a wide range of frequencies,
from radio to light to gamma rays. The formula for an electromagnetic wave front propagating
through space is x² + y² + z² – c²t² = 0. This equation defines a spherical surface expanding at the
speed of light in 3-dimensional (x-y-z) space as t increases. It also describes a 4-dimensional object
called a hyper-cone.
A hyper-cone is impossible for us to visualize, so in order to represent this in a 3-dimensional
diagram, one of the three spatial dimensions is eliminated: x² + y² – c²t² = 0. This equation
defines a circle expanding at the speed of light in 2-dimensional (x-y) space as t increases. It also
describes an object called a light cone, which reduces to a point at t = 0 and flares out as two cones
in opposite directions for |t| > 0. Radius = √(x² + y²) = c|t|.
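
A few lines of Python (my own sketch, nothing more) verify that points on the expanding circle of
radius c|t| satisfy the reduced light-cone equation:

    import math

    c = 299_792_458.0  # speed of light, m/s

    def on_light_cone(x, y, t):
        # The reduced equation x^2 + y^2 - c^2 t^2 = 0, tested with a small tolerance
        return math.isclose(x**2 + y**2, (c * t)**2, rel_tol=1e-12)

    for t in (1.0, 2.0, 3.0):                        # seconds after the flash
        r = c * abs(t)                               # radius of the wave front
        x, y = r * math.cos(0.7), r * math.sin(0.7)  # any point on that circle
        print(t, r, on_light_cone(x, y, t))          # True every time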
For t > 0, an observer's light cone encompasses everything in the future to which he is causally
linked, and for t < 0, the observer's light cone encompasses everything in the past to which he is
causally linked. Unfortunately, using light, or any other kind of electromagnetic radiation, an
observer can “see” only the thin sliver of his past universe that lies along the surface of his light
cone. What astronomers see (or seem to see) along these thin slivers using telescopes is a universe
that is absolutely flat and is filled with galaxies that are all receding from us. These observations
have led most astronomers to conclude that space itself expands over time, which has become the
standard cosmological model: At some point between 13 and 14 billion years ago, everything that
comprises “our” universe (everything that we are causally connected to) was compressed into a tiny
singularity smaller than a proton. The problem with this model is that while our light cone expands
into the past, our universe itself gets smaller. Logically, the surface encompassed by our light cone
at some point in the past must have been larger than the entire universe, which of course is
impossible. So while observations seem to show a universe that is both flat and expanding over
time, it can be either one or the other, but logically it cannot be both.
Nature seems to be playing games with us – presenting a picture of reality to us that is logically
inconsistent. But actually, Nature is doing the very best She can to provide us with information that
our puny brains can process. The problem isn't with Her, it's with us. We don't seem to grasp the
fact that the universe isn't really an object because it has no exterior; the universe is inherently
unobservable. So it's no wonder that any conclusions about the universe we draw through
observations are going to be logically inconsistent. The question in my mind is whether an
inherently unobservable universe can even be said to exist in the true sense of the word.
34. There Is Much More within Our Causal Patch than Meets the Eye.
As shown in the previous section, the universe as a whole is not an object. As Sir Arthur Eddington
once said, “Not only is the universe stranger than we imagine – it is stranger than we can imagine.”
It's wrong to think of space-time as an object; instead, space and time are degrees of separation that
are required by the laws of relativity and causality. A hyper-cone, x² + y² + z² – c²t² = 0, defines
two causal patches. One patch, corresponding to t > 0, contains everything we can influence in the
future. The other patch, corresponding to t < 0, contains everything that can influence us from the
past. All our visual observations are limited to an ultra-thin sliver of reality lying along the outer
surface of the t < 0 causal patch. If a 4-dimensional hyper-cone is impossible to visualize, you can
still grasp the concept of causal patches by using only two spatial dimensions in the above equation,
which reduces down to two cone-shaped causal patches that can be drawn on paper.
When we look out into space, it seems like we are seeing a 3-dimensional diorama filled with
stars and galaxies. But we're just looking along a surface with only two fully-spatial dimensions;
the third dimension has both space and time components. Instead of using x-y-z space coordinates,
it's best to visualize this with r-θ-φ polar coordinates along the surface of the hyper-cone. The r-
coordinate is the distance between us and objects we observe, and the θ- and φ-coordinates are the
angular distances between those objects. Being confined to the outer surface of a 4-dimensional
hyper-cone, θ and φ circumscribe a spherical surface in space, so the θ- and φ-coordinates are
spatial. But the r-coordinate combines both spatial and temporal dimensions. In other words,
looking outward along the outer surface of a 4-dimensional hyper-cone is also looking back in time.
[Using a conventional 3-dimensional cone, there would only be two polar coordinates left to
consider: r and θ. As θ rotates through 360°, it defines a circle in space, so θ is fully spatial,
whereas r, going from the point of the cone down its surface, is both spatial and temporal.]
Distant thunder emits sound waves that reach us along a path within the causal patch that is more
temporal than light's path along the hyper-cone's outer surface; this is why we hear thunder after we
see the lightning. Traces of our great-great-great grandparents' DNA follow mostly temporal routes
inside the causal patch, and influence us directly after several generations. We can't see dinosaurs
roaming the Earth by looking through a telescope along the outer surface of the hyper-cone;
dinosaur evidence emerges in the fossil record from deep within the causal patch.
Leonard Susskind's “minus first law of quantum physics” states that information cannot be erased,
which is just another way of stating the second law of thermodynamics. Thus, everything within
our t < 0 causal patch must leave indelible information that can ultimately be recovered here and
now – at least in principle. Although much information may appear hopelessly scrambled and
irretrievably lost, it can always be unscrambled by a sufficiently clever decoding device. Saying,
“Well, that information used to exist someplace else, but you won't find it here and now,” is
tantamount to saying that information was erased.
The 19th-century occultist H. P. Blavatsky believed that such "lost" information is encoded in
something called “Akashic records.” Of course, ideas about non-physical records are judged to be
highly unscientific. But bear in mind that EM waves were also “non-physical” in the past, and most
scientists in 1860 would have judged ideas about radio transmission as being unscientific. Non-
physical and supernatural phenomena are accepted as physical and natural if scientists understand
them, develop successful theories about them, and apply physical laws to them.
In fact, I'd say that causality and the law of preservation of information seem to require that
everything within the t < 0 causal patch is superimposed and encoded into the here and now.
Maybe this is what the popular term “holographic universe” is all about.
35. The Consequences of Bell's Theorem Force Us to Examine Free Will.
If asked to name the most significant contribution to fundamental physics in the 20th century, I
would have to say it was Bell's Theorem. The implications of this elegant masterpiece are far-
reaching and profound, forcing physicists to examine free will and consciousness.
A fairly concise example of Bell's Theorem was presented in Appendix A of my essay Is Science
Solving the Reality Riddle? The underlying question is whether quantum properties are inherently
uncertain until they are measured, or if they are predetermined through so-called local hidden
variables. When experiments based on this theorem are carried out, the results are unequivocal:
Local hidden variables do not exist and quantum uncertainty is indisputable. In Appendix H of my
essay Order, Chaos and the End of Reductionism, I explored the idea that quantum uncertainty is
indistinguishable from classical chaotic determinism. In this model, measurements of up-down or
left-right of electron spins or photon polarizations are the {0,1} binary outputs of chaotic digital
algorithms running on tiny computers embedded inside those particles. However, in light of the
many experimental violations of Bell's Inequality, we must reject that model; those tiny embedded
computers would be, in effect, local hidden variables that are experimentally ruled out.
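For readers who like numbers, here's a short Python sketch of my own using the standard CHSH
form of Bell's inequality. It assumes the textbook quantum correlation E(a,b) = –cos(a – b) for a
spin-singlet pair; any local-hidden-variable model must obey |S| ≤ 2, while the quantum prediction
reaches 2√2.

    import math

    def E(a, b):
        # Quantum correlation for singlet-pair measurements at angles a and b
        return -math.cos(a - b)

    # The standard CHSH measurement angles, in radians
    a1, a2 = 0.0, math.pi / 2
    b1, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))  # 2.828... = 2*sqrt(2), well beyond the classical bound of 2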
We are forced to accept quantum processes as inherently indeterminate, i.e. there are events that are
uncaused insofar as the usual meaning of causation is concerned. The concepts of space and time
are used to place events in the correct causal order. However, in a world without causation, space
and time are completely irrelevant. I think this essentially means a fundamental symmetry between
space and time exists in the quantum world. When that symmetry is broken, space and time emerge
together as distinct entities. This symmetry breakage begins at the scale of large, highly
interconnected systems, i.e. on “classical” scales.
One of the hallmarks of quantum processes is reversibility. The evolution of the wave function,
given by Schrödinger's equation, is determined by the Hamiltonian operator, which is linear and
fully bi-directional. This results in conservation of information. For example, an electron contains
a given quantity of information that can never increase or decrease. On the other hand, classical
systems mostly exhibit processes that are nonlinear, chaotic, and irreversible. This results in the
creation of entropy (information) that can increase but can never decrease. In the classical world,
everything is subject to the laws of causality, placing events in causal order according to the laws of
relativity based on the concept of space and time. In summary:
Quantum = Indeterminate, not ordered by space and time, completely reversible, with a constant
number of qubits of information, without a history.
Classical = Subject to causation, ordered by space and time, irreversible, with an increasing number
of bits of information over time and a history.
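The reversibility half of this summary is easy to illustrate. The Python sketch below (my own toy
example, with an arbitrary rotation angle standing in for one step of Hamiltonian evolution) evolves
a two-component state with a unitary matrix; applying the conjugate transpose undoes the step
exactly, so no information is lost.

    import numpy as np

    theta = 0.8
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)  # one unitary step

    psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state (one qubit's worth)
    psi1 = U @ psi0                             # forward evolution
    back = U.conj().T @ psi1                    # U-dagger runs the film backward

    print(np.allclose(back, psi0))  # True: quantum evolution is fully reversible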
Many physicists have a severe mental block when it comes to drawing a bright line between the
quantum world and the classical world, and thus they keep trying to impose classical concepts on
the quantum world even when these concepts just don't apply. Some physicists have proposed
“superdeterminism” as a way to avoid quantum indeterminacy in light of the experimental
violations of Bell's inequality. In this extreme case of mental gymnastics, superdeterminalists
propose that we live in a four-dimensional block universe (BU), where each world line has been
laid out in detail for all eternity. This is a throwback to Laplace's “clockwork universe” where if the
present state of the universe were known with sufficient precision, then a supremely intelligent
being could derive every previous and future state. In such a universe, information would be
constant because all information about the present state of the universe has already existed in the
past. Hence, every process must be fully and completely reversible; otherwise, irreversibility would
cause the amount of entropy/information to increase. Past, present and future are thus only mental
concepts. There is only a single timeless reality that is completely deterministic, without any
possibility of altering a world line through the agency of “free will.”
The proponents of the BU pat themselves on the back for having invented such a clever “solution”
to the quantum/classical boundary problem. By making everything predetermined, the quantum and
classical worlds are exactly alike; thus, the boundary simply disappears. Case closed. It's a classic
example of the logical fallacy known as Appeal to Ignorance: If you are unable to disprove my
hypothesis, then you are forced to accept it!
This reminds me of the Omphalos hypothesis, which tries to explain away all the physical evidence
that the world is much older than the 6,000 years allotted to it by the Bible. The fact that we can see
light from stars millions of light-years away is easily explained: On Creation Day IV, God placed
starlight in its current locations. Therefore, physical evidence cannot be taken as any indication of
the age of the universe. Of course on that basis, for all we know, God could have created the world
completely intact last Thursday, along with human memories and written historical records that
make it appear much older. Such is the “logic” of Appeal to Ignorance.
Well, I hate to break it to all the BU aficionados in the scientific community, but the BU model
won't hold water, and I can prove it. There is no room for irreversibility in the BU model, because
all processes reduce to a set of completely reversible Hamiltonian operators. And yet irreversible
processes are clearly visible everywhere you look.
Suppose I rapidly rub my hands together, then hold them in the air. The kinetic energy from
rubbing my hands together is transformed by friction into heat, which then radiates away into space.
That process is irreversible. No matter how long I hold my hands up in the air, heat will not radiate
back into my hands to be converted back into kinetic energy, causing my hands to move back and
forth. It's not just a matter that such a time reversal is “highly improbable,” as blithely explained by
the apologists of reductionism. No, it's utterly impossible.
So from where does irreversibility emerge? The answer is from nonlinear, chaotic processes. Let's
compare a linear process with a nonlinear one: First, suppose the present state of a system, ψ', is a
linear function of its previous state, ψ, for example ψ' = A + Bψ. Then knowing ψ' allows you to
compute a unique value for ψ in the past:
ψ = (ψ' – A)/B
The linear transformation ψ → ψ' is said to be reversible since it could just as easily go in the
opposite direction. Hamiltonian operators working at the quantum level are like this.
Now, suppose ψ' is a nonlinear function of ψ, for example ψ' = ψ². Here, ψ can also be expressed
as a function of ψ':
ψ = √ψ'
However, ψ doesn't have a unique value in the past because if ψ' is positive, ψ could have been
either positive or negative; it's impossible – even in principle – to tell which one it was. The
nonlinear transformation ψ → ψ' is called irreversible because it can only go in one direction: From
the past to the present. Classical processes are mostly chaotic, deterministic, yet irreversible.
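The contrast is plain when the two maps are run in code. In this Python sketch of mine (with
arbitrary values for A and B), the linear map can always be undone, while the squaring map merges
two distinct pasts into one present:

    A, B = 1.0, 2.0

    def linear_forward(psi):     # psi' = A + B*psi
        return A + B * psi

    def linear_backward(psi_p):  # unique inverse: psi = (psi' - A)/B
        return (psi_p - A) / B

    def square_forward(psi):     # psi' = psi**2, a nonlinear map
        return psi ** 2

    print(linear_backward(linear_forward(-3.0)))      # -3.0, recovered exactly
    print(square_forward(-3.0), square_forward(3.0))  # 9.0 and 9.0: the sign is gone
    # Given psi' = 9.0, nothing - even in principle - can tell +3 from -3 again.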
In a BU, superdeterminism requires all processes to be fully reversible, so in principle you could
compute any prior state of the universe from its present state or literally travel backward in time. Is
that the kind of universe we live in? No. How do we know? Because entropy increases. How do
we know that? Because when we measure entropy, it almost always increases over time.
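A toy "measurement" of entropy takes only a few lines. The Python sketch below (my own
illustration; the walker count, line length, and bin count are all arbitrary choices) starts 500 random
walkers bunched at one end of a line and tracks the coarse-grained Shannon entropy of their
positions; it climbs as the walkers spread out and never settles back to zero.

    import random
    from collections import Counter
    from math import log2

    random.seed(1)
    SIZE, BINS, N = 100, 10, 500
    pos = [0] * N  # every walker starts at the same spot

    def entropy_bits(positions):
        # Coarse-grained Shannon entropy over BINS equal cells, in bits
        counts = Counter(p * BINS // SIZE for p in positions)
        return -sum((k / N) * log2(k / N) for k in counts.values())

    for tick in range(501):
        if tick % 100 == 0:
            print(tick, round(entropy_bits(pos), 3))  # climbs toward log2(10)
        for i in range(N):                            # one random step per walker
            pos[i] = min(max(pos[i] + random.choice((-1, 1)), 0), SIZE - 1)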
We must face the fact that there are two distinct worlds: 1) a quantum world where space and time
are absent, which is non-causal, indeterminate and reversible; and 2) the classical, causal world of
space-time, which is chaotically deterministic and irreversible. Drawing a distinct line between
those worlds can be difficult sometimes, but that doesn't mean the two worlds are the same.
How does the idea of free will fit into all of this? In the quantum/classical world, space, time and
causation are nonexistent at the quantum foundation of reality; they only emerge in the classical
world of causality. I believe that Consciousness (I use the upper-case C to distinguish it from the
lower-case c consciousness that is attached to a local thinking unit called a brain) can only operate
in a region unbounded by space and time, which is similar to itself. The quantum world is where
the rubber meets the road; it is here where Consciousness is both the Observer and the Agent of free
will. Consciousness initiates “indeterminate” or “causeless” changes – not having physical causes –
at the quantum level through the action of free will. Everything else emerges from Consciousness
interacting with the quantum world.
36. Reality Is Composed of Two Separate, Interconnected Spaces.
In my essay Order, Chaos and the End of Reductionism, the concepts of causal space (CS) and non-
causal space (NCS) are introduced in Appendix L and explored further in Appendix M. NCS is the
foundation of reality, which is commonly referred to as the quantum world. We live in a radically
relational universe: Things must be defined in relation to other things because there is nothing
outside or beyond the universe that can define them. Therefore, when objects are removed from
interactions with other things, they fall into NCS where everything is uncertain and where the
classical concept of causality does not apply. This is literally a timeless world. CS is the familiar,
everyday, relational world where classical objects live. Everything in CS is subject to the laws of
causality and entropy, from which the dimension of time emerges.

It is clear that time is qualitatively different from the spatial dimensions. First of all, there is very
little freedom to "move" through time like there is freedom to move through space. In fact,
movement through time is meaningless because movement is change that must be based on time.
Secondly, time is not symmetric because entropy can increase but never decrease over time. Time
emerges because of entropy, which is very closely tied to causality. CS is a 1T+3D world.
There are no physical movements as such in NCS. In the terminology of special relativity, things in
NCS are “space-like” – an impossibility in the relativistic world of CS. Things “happen” in NCS
but not in the same way as they do in CS. Those “happenings” are revealed through interactions
with CS, which tries to place them in causal sequence, but once in a while things go awry. One
famous example is an entangled Bell pair of particles that seem to "communicate" instantaneously
with each other across space. They do, in fact, communicate. But no classical bits of information
are exchanged. In NCS, there are no classical bits of information because there is no entropy.
Information in NCS comes in the form of "qubits" that can be scrambled but never created nor
destroyed. It is only in CS that information in the form of classical bits is created. Although
NCS and CS are separate, communication between them does occur. In fact, it occurs continuously
via the so-called "collapse of the wave function," where qubits are transformed into classical bits.
The number of classical bits increases, causing "time" to emerge in CS.
In addition to time, there are three local dimensions of space in CS. The number three is not
arbitrary. It is mathematically determined as a direct result of Emmy Noether's theorem, which ties
rotational symmetry to the conservation of angular momentum. Angular momentum (as a cross
product) is only defined in 3-dimensional
and 7-dimensional spaces, and planes of rotation can only be uniquely determined in 3-dimensional
space. So in order to have rotational symmetry, the number of spatial dimensions must equal three.
In summary, a radically relational universe must have rotational symmetry, so angular momentum
must be conserved, which requires the number of rotational degrees of freedom to equal the number
of dimensions, namely three. Some argue that a 3-dimensional universe exists in order to satisfy the
anthropic principle. This is false. I suspect that very few or none of the “free parameters” of our
“finely-tuned” universe can actually be chosen freely. Instead, these can probably be derived from
mathematical equations based on the underlying First Principle of a radically relational universe.
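As a quick check of the dimension-counting behind this claim, here is a small Python sketch of my
own (not from the original essays): in n spatial dimensions there are n(n−1)/2 independent planes of
rotation, and requiring that count to equal n singles out n = 3.

# In n spatial dimensions there are n*(n-1)/2 independent rotation planes.
# Requiring the number of rotational degrees of freedom to equal the number
# of spatial dimensions picks out n = 3 (ignoring the trivial n = 0 case).
for n in range(1, 11):
    planes = n * (n - 1) // 2
    marker = "  <-- planes == dimensions" if planes == n else ""
    print(f"n = {n}: {planes} rotation plane(s){marker}")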
I'm going out on a limb with a guess that NCS is a timeless domain with one less dimension than
CS has; a 1ω + 2D space with two spatial dimensions and time replaced by frequency. That
conjecture will be explored in much more detail in a future appendix to Order, Chaos and the End
of Reductionism. Stay tuned.
37. Mathematics Demands Non-Causal Space to Have 1ω + 2D Dimensions.
Appendix N of Order, Chaos and the End of Reductionism explored the frequency and spatial
dimensions of NCS. I found that at least two “spacelike” dimensions, ξ and ψ, are required to
accommodate the complex values of the Fourier transform F{f(t)} = α(ω) + iβ(ω). But couldn't
there just as well be three, four, or more dimensions? I think the answer is “no” and here's why.
The ξ-ψ axes can be rotated arbitrarily without changing the properties of F(ω). In a complex
number system, rotating counter-clockwise by an angle φ is the same as multiplying by e^(iφ).
Multiplying two complex numbers is commutative; in other words, a × b = b × a. Multiplying two
3-dimensional numbers isn't commutative, and furthermore 3-dimensional multiplication isn't even
a closed operation because the product has a fourth dimension. Multiplying two quaternions is a
closed operation, but it isn't commutative. The order of rotations cannot be distinguished in NCS
because there are no time-ordered causal sequences in NCS; therefore, multiplication of “spatial”
values must be commutative and there are only two sets of numbers where that holds, i.e., real
numbers, ℝ, and complex numbers, ℂ. The set ℝ has too few dimensions to express F(ω), so the
“spatial” values of NCS must be contained in ℂ, and no other sets having dimensions greater or less
than two.
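To make the commutativity argument concrete, here is a minimal Python sketch of my own; the
quaternion product is written out by hand rather than taken from any library:

# Complex multiplication commutes; quaternion multiplication does not.
a, b = 2 + 3j, 1 - 4j
print(a * b == b * a)        # True: rotations in the ξ-ψ plane can be taken in any order

def quat_mul(p, q):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(quat_mul(i, j))        # (0, 0, 0, 1):  i*j = +k
print(quat_mul(j, i))        # (0, 0, 0, -1): j*i = -k, so order matters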
Although CS and NCS have different spacelike dimensions, CS and NCS must still be
complementary and codependent. Complementarity between time in CS and frequency in NCS is
accomplished through the Fourier transform and its inverse. Complementarity between the three
spatial dimensions in CS and the two “spacelike” dimensions in NCS might be accomplished
through holography; i.e., mathematically combining vectors in the ξ and ψ dimensions of NCS that
correlate with “objects” or “particles” in the x, y, and z dimensions of CS. That process may
involve the convolution function, but solving convolution integrals in two and three dimensions is
way too much math for my feeble brain to handle. So I guess I'll have to put off any further
discussion on this topic until a later time.
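The time-frequency complementarity invoked above is easy to demonstrate numerically. The sketch
below is my illustration, using NumPy's discrete Fourier transform as a stand-in for the continuous
transform F{f(t)}; transforming to the frequency domain and back loses nothing:

import numpy as np

# f(t): a simple signal sampled at 256 points over one unit of "time".
t = np.linspace(0.0, 1.0, 256, endpoint=False)
f_t = np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

F_w = np.fft.fft(f_t)            # complex values, α(ω) + iβ(ω)
f_back = np.fft.ifft(F_w).real   # the inverse transform recovers f(t)

print(np.allclose(f_t, f_back))  # True: nothing is lost in either direction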
I suspect that the property of mass has something to do with the property of spin. Almost every
elementary particle has a spin property, which is some multiple of a quantum scalar value ħ/2. In
NCS, I believe “spin” is related to the curl of a vector field, Λ, that lies in the ξ-ψ planes. The curl,
∇ × Λ, can only point in the ±ω direction, so spin acts like a bidirectional scalar quantity ±s.
Translating the spin element ħ/2 to the angular momentum vector in an extended CS requires both a
distance and velocity vector, r and v, and a scalar mass, m. In other words, the quantum-
mechanical spin property in NCS requires the existence of mass in order to satisfy the physical
requirements of 3-dimensional CS.
38. Stochasticity Is Uncaused Action of Free Will.
The flip of a coin, the roll of a die, and the spin of a roulette wheel are often mistaken for “random”
outcomes. In fact, if a die were rolled over and over with the same initial position, velocity and
angular momentum, the outcome would be the same every time. With quantum uncertainty,
outcomes truly are random and unpredictable. Quantum unpredictability has the property of
stochasticity. This should not be confused with chaotic unpredictability, which is actually
deterministic at its core.
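A toy model makes the distinction vivid. The logistic map below is my own illustration: it is wildly
unpredictable in practice, yet rerunning it from exactly the same initial condition always repeats the
outcome, which is what “deterministic at its core” means. Quantum outcomes do not behave this way.

# Chaotic unpredictability is deterministic at its core: the same initial
# condition always gives the same outcome, while nearby initial conditions
# diverge rapidly (the "rolled die").
def logistic_orbit(x0, steps=50, r=4.0):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

print(logistic_orbit(0.123456) == logistic_orbit(0.123456))  # True: same roll, same result
print(logistic_orbit(0.123456), logistic_orbit(0.123457))    # tiny change, very different results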
Sherlock Holmes said, “When you have eliminated the impossible, whatever remains, however
improbable, must be the truth.” I'm forced to conclude that stochasticity must be the uncaused action of
free will because there simply is no physical explanation for it.
Of course, free will implies the existence of consciousness. It is difficult to imagine that an electron
or a photon has consciousness, but that is the case, although it's consciousness in a very primitive
form. More of my ideas about mind, consciousness and awareness are found in my essay
Teachings from Near Death Experiences, at https://sites.google.com/site/amateurscientistessays/
39. We Live in a Goldilocks Universe for a Reason.
I'm a big fan of the Principle of Sufficient Reason, attributed to the great Gottfried Leibniz. This
principle states that if P is true, then there must be sufficient explanation for why P is true. One of
the questions that has stumped physicists in recent years is why our universe appears to be fine-
tuned to support life; i.e., a Goldilocks universe. This question is difficult only if you believe that
the laws of physics and their parameters were chosen at random instead of being based on reason.
For example, it is widely stated that if the gravitational constant, G, were a few percentage points
larger or smaller than its current value, stars would either burn out too quickly for complex
life to evolve on their planets, or no stars could form in the first place. Both of these scenarios would
be a big problem for life. So why does G have precisely the right value to allow stars to form and
burn for billions of years? The lazy answer is to invoke the strong anthropic principle along with
the assumption that there are multiple universes, each with different physical laws and parameters;
ours just happens to have the right combinations of these laws and parameters to allow complex life
to evolve. As I stated in Conjecture 24, this answer is the ultimate intellectual cop out.
The fine-tuning question is sometimes stated as the hierarchy problem, where a huge discrepancy
apparently exists between the force of gravity and the other forces (electromagnetic, strong, and
weak). The electric force between two electrons is F_e = k_e e_0²/d², where d is the distance between
the electrons, e_0 is the electron's charge and k_e is a constant. The gravitational force between two
electrons is F_g = G m_e²/d², where m_e is the electron's mass and G is the universal gravitational
constant. The ratio F_e/F_g = (k_e/G) × (e_0/m_e)². When we substitute the numerical values of k_e, G, e_0
and m_e into that formula, we get a ratio of 4 × 10⁴². Now gravity and the electric force both have to
be finely-tuned for stars and chemistry to exist, so what are the odds that two randomly-selected
sets of values (k_e, e_0) and (G, m_e) would hit both of those nails on the head, given the fact that F_e is
42 orders of magnitude greater than F_g? In other words, how could G and m_e end up so small?
If you believe in the Principle of Sufficient Reason, then the fact we live in a Goldilocks universe is
not really a problem, and here's the reason (and it has nothing to do with “chance”): The initial
temperature of the universe was very high, around 10³² kelvins. At that temperature, the
gravitational, electromagnetic, strong and weak forces all had about the same strength. All
elementary particles were pretty much the same, whether they be electrons, photons, neutrinos,
or quarks, each with an energy somewhere around the Planck energy of 1.956 × 10⁹ J. Because mass
and energy are equivalent, they would have weighed in at the enormous Planck mass, m_p.
Substituting m_p for m_e in the F_e/F_g formula decreases that ratio by a factor of (m_p/m_e)² ≈ 6 × 10⁴⁴,
which more than erases the factor of 4 × 10⁴² discrepancy between F_e and F_g we see today. So in
the initial state of the universe, the gravitational attraction between two very heavy Planck-mass
electrons actually would have been equal to (or even about a hundred times greater than) the electric
force between them. In other
words, gravity was initially not weak at all. The questions are where did all the mass-energy of
those heavy Planck-scale particles disappear to, and why is gravity so weak today?
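Both ratios are easy to check with standard values of the constants. The script below is my own
back-of-the-envelope check, taking the Planck mass (about 2.18 × 10⁻⁸ kg) for the primordial
“heavy electrons”:

# Check of the F_e/F_g ratios discussed above (approximate CODATA values).
ke = 8.988e9      # Coulomb constant, N·m²/C²
G  = 6.674e-11    # gravitational constant, N·m²/kg²
e0 = 1.602e-19    # electron charge, C
me = 9.109e-31    # electron mass, kg
mp = 2.176e-8     # Planck mass, kg

print(f"F_e/F_g with electron mass: {(ke / G) * (e0 / me)**2:.2e}")  # ~4e42
print(f"F_e/F_g with Planck mass:   {(ke / G) * (e0 / mp)**2:.2e}")  # ~7e-3: gravity wins

With Planck-mass particles the ratio drops to roughly 0.007; that is, gravity comes out about a
hundred times stronger than the electric force, consistent with the figures above.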
Well, here's the secret: Gravity has negative energy. As the universe cooled down and the energies
of elementary particles became less positive, gravitational energy became less negative (more
positive) because particles became less massive and moved farther apart. If you're a Mind who
dreams an entire universe into existence out of literally nothing, you have to set the parameter G to
some value to ensure that negative gravitational energy completely cancels all positive mass-energy
throughout time. This value of G makes gravity appear to be “weak” today for this reason.
This example suggests that there are sufficient reasons behind all other “laws of nature” along with
values of the constants appearing in their equations. Some physicists go to great lengths to come up
with very complicated reasons, such as the Randall-Sundrum model of the universe. Wikipedia
describes it thusly: “[According to the R-S model] our universe is a five-dimensional anti-de Sitter
space and the elementary particles except for the graviton are located on a (3+1) dimensional brane
or branes.” In case you're wondering if this takes care of fine-tuning with respect to gravity, I'm
afraid not: “Such models require two fine tunings; one for the value of the bulk cosmological
constant and the other for the brane tensions.” So it seems to me that this model makes fine tuning
worse instead of better. When applying the Principle of Sufficient Reason, you should always try to
keep things as simple as possible. I think the simple idea of gravity maintaining a zero-energy
balance is much better than conjuring a five-dimensional space-time, requiring two fine tunings,
and free-floating gravitons, while all other particles are stuck on three-dimensional branes.

40. Purported “Dark Matter” in Spiral Galaxies Might Just Be Ordinary Hydrogen Gas.
In Appendix S of Is Science Solving the Reality Riddle?, I calculated the orbital velocity profile of
stars in spiral galaxies, based on the assumption that most of the matter in a spiral galaxy is
comprised of a sphere of very thin, ordinary gas. If the gas is at a frigid uniform temperature and it
obeys the ideal gas law, the density of the gas decreases very rapidly over the distance from the
center of the sphere; however, the total mass of the gas continues to rise in an almost linear fashion.
This calculation produces an orbital velocity profile that closely matches the “anomalous” orbital
velocities that have been observed and are attributed to dark matter. This would imply that dark
matter should also obey the ideal gas law P = ρ RT.
As pointed out in Appendix J of the same essay, pressure is produced through the transfer of
momentum when particles collide (scatter), and this requires strong particle-to-particle forces. With
neutral gas molecules, the electron shells of molecules strongly repel each other as they approach
each other, causing scattering with momentum transfers that manifest as pressure. By definition,
dark matter is subject to gravity alone, which is far too weak on the atomic scale to produce any
scattering or pressure. With no particle-to-particle forces capable of producing internal pressure, it
is simply not possible for gravity to hold a stable cloud of dark matter in place.
So is it plausible that the missing mass in spiral galaxies could be nothing more than ordinary gas?
I believe it could be. According to the literature, up to 90% of the total mass of a spiral galaxy is
unseen or missing. The velocities of orbiting stars near the outer edge of the disk are the same
irrespective of the distance from the center – the so-called anomaly. This would occur if the
missing mass were in the form of a spherical halo whose mass is distributed in a particular way due
to obeying the ideal gas law. Only ordinary matter forms a gas that behaves that way, so let's do a
simple back-of-the-envelope calculation to find out what the properties of such a halo are.
Estimates vary, but mathematical models suggest that the missing mass of the Milky Way is
somewhere around 10¹² solar masses, or 2 × 10⁴² kg. It should be remembered that the hot, visible
disk containing stars, dust, and other visible matter is razor-thin, whereas the halo of missing mass
extends 360° in all directions, and most of it is far away from the hot stuff. Thus, most of the gas in
the halo should be at the mean temperature of the universe, 2.7 K, or just above absolute zero. If the
effective radius of the halo is twice the radius of the visible disk, 9.46 × 10¹⁷ km, then its volume is
(4/3) π (9.46 × 10¹⁷ km)³ = 3.55 × 10⁵⁴ km³.
The mean density of the halo is ρ̃ = 2 × 10⁴² kg / 3.55 × 10⁵⁴ km³ = 5.63 × 10⁻¹³ kg/km³. If the halo is
composed of molecular hydrogen, weighing 2.016 × 10⁻³ kg/mole, each cubic kilometer of the halo
would contain 5.63 × 10⁻¹³ / 2.016 × 10⁻³ = 2.79 × 10⁻¹⁰ moles of hydrogen, or about 1.68 × 10¹⁴
molecules. That may seem like a lot of molecules, but one cubic kilometer is an awfully big
volume. A 20 ml sample of this rarefied gas, about the size of a small test tube, would contain only
about three hydrogen molecules.
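These numbers are simple enough to reproduce. Here is the arithmetic as a short script (my own
restatement, using the standard molar mass of molecular hydrogen):

import math

M_halo   = 2e42                   # missing mass, kg
R_halo   = 9.46e17                # halo radius, km (about 100,000 light years)
V_halo   = (4.0 / 3.0) * math.pi * R_halo**3   # ~3.55e54 km³
rho      = M_halo / V_halo                     # ~5.6e-13 kg/km³
molar_H2 = 2.016e-3               # kg per mole of molecular hydrogen
N_A      = 6.022e23               # Avogadro's number

molecules_per_km3 = rho / molar_H2 * N_A       # ~1.7e14
per_20ml = molecules_per_km3 / 1e12 * 0.020    # 1 km³ = 1e12 litres
print(f"{molecules_per_km3:.2e} molecules per km³, about {per_20ml:.1f} per 20 ml")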
To gain some more perspective on just how rarefied this gas really is, the mean free path between
collisions of one hydrogen molecule with another molecule is 1.76 × 10⁸ km! As great as this
distance seems, it's only 0.0000000093% of the halo's diameter. At a temperature of 2.7 K, each
hydrogen molecule would collide with another molecule every 30 years on average. In comparison,
a hydrogen molecule at 20°C in a container of hydrogen at atmospheric pressure would experience
1.7 billion collisions per second.
Would it be possible for three molecules of hydrogen at 2.7 K to get lost inside a 20 ml test tube?
I'd think it's quite possible, which is why I also think it's quite possible that 2 × 10⁴² kg of frigid
hydrogen within a vast sphere 200,000 light years in diameter could go unnoticed, other than a
strange gravitational anomaly affecting the orbital velocities of stars.
41. Current Theories of Galaxy Formation Are Incomplete.
Continuing with the discussion of missing matter in spiral galaxies from the previous conjecture, it
appears that current theories of galaxy formation tend to put the cart before the horse. Thinking
about this further, it's clear to me that there are only two types of gravitationally-bound structures
that are stable: 1) objects orbiting around a very large central mass, or 2) a collection of particles
producing internal pressures that restrain inward gravitational forces.
Our solar system is an example of the first type, where the Sun represents 99.8% to 99.9% of the
total mass of the system. Two objects of roughly the same size can orbit around their common
center of mass, but if a third object is added, the system tends to be unstable. Adding more objects
makes the system chaotic. If the Sun's mass were reduced to that of a large planet, there would be
no way to form a stable orbital structure consisting only of planets. The stars in the Milky Way are
orbiting around an object in the center of the disk, which some say is a black hole. This central
object weighs about four million solar masses, but there is no possibility that 100 billion stars in the
Milky Way could be in stable orbits around such a measly little mass.
A cloud or halo of particles obeying the ideal gas law is an example of the second type. That cloud
will be roughly spherical, although it would tend to flatten out into an ellipsoid if it rotates. If the
cloud is disturbed, it will quickly reform into a stable sphere or ellipsoid again, much like a soap
bubble after it is momentarily disturbed. The pressure and density are at peak values in the center
of the cloud, but these rapidly trail off toward the outer surface. (Refer to Appendix S of my essay
Is Science Solving the Reality Riddle? for further details.) Only after a very large halo has formed
can compact objects, such as stars and their planets, orbit around the halo's center of mass. The
reason is simple: Stars and planets simply gravitate toward one another, but they don't produce any
back pressure so they cannot form stable spherical or ellipsoid structures on their own.
This is how I envision galaxies form: The universe was filled with gas molecules (mostly
hydrogen) that were more or less evenly distributed throughout space, with local concentrations of
mass that made the distribution unstable. The local concentrations began to separate from each
other and these eventually collapsed into stable halos. All of the halos were rotating with varying
degrees of angular momentum, which flattened them into ellipsoids of various shapes. As the halos
collapsed, their internal pressures increased with the central regions comprising millions of solar
masses. The collapse of the central regions accelerated, forming objects known as quasars. I believe
those highly-luminous quasars are eternally collapsing objects (ECOs), described in my essay Why
There Are No True Black Holes. Initially, quasars are exceedingly violent, but their ultimate
collapse results in extreme gravitational red-shifting, making them appear as “black holes” to
distant observers. The central “black hole” in our Milky Way was once a violent quasar that
continues its collapse as a massive ECO.
The extreme outward radiation pressure from the central quasar/ECO would have hollowed out the
interior of the collapsing halo. Once the quasar settled down as a highly red-shifted ECO, gas
would fill back in. Matter collects into smaller clouds and star systems throughout the halo. If the
halo is spinning, these objects will tend to gravitate toward the central plane of rotation, forming a
disk. Any matter that isn't a back-pressure-producing gas tends to collect in that disk, where
matter is very highly concentrated compared to the bulk of the halo, which is a vanishingly rarefied
gas containing roughly three hydrogen molecules per 20 ml of volume. Nevertheless, the thin,
frigid gas in the halo contains ten times as much mass as the hot visible disk.
Not all galaxies are disk-shaped. When a star system forms in the halo of a galaxy that is hardly
spinning, it will tend to sink toward the halo's center of mass under gravitation. If it passes close to
the center of mass (where a super-massive ECO probably lives), it will form a highly elongated
orbit around the center of mass. Billions of stars in such a galaxy will all follow their own
elongated orbits in random directions like a swarm of bees, forming what is known as an elliptical
galaxy. It might be possible for two non-spinning elliptical galaxies to “collide” and merge into a
rapidly-spinning conglomerate that settles down into a large stable disk galaxy, but this is highly
speculative.
42. The Kalam Cosmological Argument Is Fatally Flawed.
The Kalam Cosmological Argument (KCA) attempts to prove that the existence of the universe has a
cause, and that by extension a personal Deity exists. Kalam is an Arabic word that means discourse,
making the argument sound like it originated in the Bayt al-Hikma (the medieval House of Wisdom
in Baghdad during the Golden Age of Islam). In reality, the KCA was invented in 1979, not long
after cosmologists came to a consensus about the universe starting out in a big bang. Part A of the
KCA is a piece of pseudo logic dressed up as a syllogism like this: “All men are mortals; Aristotle
is a man; therefore, Aristotle is a mortal.” There are three elements of Part A:
1. Everything that begins to exist has a cause for its existence;
2. The universe began to exist; therefore,
3. There is a cause of the existence of the universe.
Most people don't see why this is not a valid syllogism. A syllogism identifies a set of objects with
a known property that applies to all members of that set. “All men are mortals” is self-evident; the
property of mortality applies to every member of the set of men. Thus, a particular member of that
set, a man named Aristotle, must share this common property. The KCA claims there is a set of
objects that have the property of “beginning to exist” and all such objects have causes, but there are
no examples of objects like these to test this claim. Physical things don't just pop into existence;
they change from one form into another. Thus, “everything that begins to exist” is an empty set.
Furthermore, syllogisms can't be applied to the universe in the first place since the universe cannot
be a part of some larger set. This reduces Part A of the KCA to just its last element, “There is a
cause of the existence of the universe,” which is nothing more than a bold, unproven claim.
The KCA misconstrues what cosmologists actually agree upon. To say the universe began to exist
implies that there was a time, say t minus 10, when the universe did not exist. There was no
external reference frame that counted down to t = 0 when the universe began to exist. Time
measures change. In the absence of a changing universe, there simply is no time. Without time, t
minus X is meaningless, so there was no beginning of existence. (See Conjecture 21.)
What cosmologists do agree on, is that the universe was once in a very hot, dense state. Some (but
not all) cosmologists believe this state was a singularity, a point of infinite curvature having infinite
density. I don't believe singularities are possible, but in any case, a singularity certainly isn't
“nothing.” Therefore, we can't conclude that the universe sprang into being from nothing. It clearly
emerged in its present state from something else, which destroys the basis of the KCA.
The KCA takes a leap from Part A to Part B, which makes the claim that a personal Creator exists,
Who is endowed with all sorts of attributes, such as infinite power, goodness, intelligence, etc.
Theists and non-theists get bogged down in endless debates with each other about Part B, but my
point is that Part B is irrelevant because Part A is fatally flawed.
Although I reject the claim that the universe has a cause, I'm still on board with causation and the
principle of sufficient reason. If a baseball is thrown hard enough at a glass window, the window
will break, and the cause of breakage is the thrown baseball. But the broken window did not begin
to exist. It was transformed from an unbroken window into a broken one. That's the way causation
works. The principle of sufficient reason states that if a thrown baseball causes a window to break,
there must be a sufficient reason why thrown baseballs break windows. There is a big difference
between explaining the transformation of something as cause and effect, and the underlying reason
why a cause produces an effect. Sufficient reason runs much deeper than simple cause and effect.
43. Gravity Resists Efforts to Quantize It Because Gravity Isn't a Force.
Scientific literature refers to gravity as one of “four forces of nature,” while physicists lament over
their inability to “unify” gravity with the “other forces.” Therein lies the problem. The other forces
have their force carriers: Electromagnetism has the photon, the weak force has its W⁺, W⁻ and Z⁰
bosons, and the strong force has a menagerie of eight gluons. Physicists have tentatively identified
the force carrier for gravity: a massless, spin-2 particle known as the graviton. String theorists are
encouraged by the fact that such a particle seems to pop out of their string equations. But Einstein's
breakthrough in generalizing relativity over 100 years ago came with the realization that gravity
isn't a force at all. This raises the question of why physicists expect to find a force carrier for a
non-existent force. It seems to be a classic case of cognitive dissonance from my perspective.
A “good” theory will do three things: 1) It will supplant an existing theory, with the older being a
special case of the newer, 2) it will explain observations that no other existing theories can explain
unless they introduce additional free parameters, and 3) it will make testable predictions that no
other existing theories can make. Einstein's general relativity is a “good” theory that does all three
of these things. The former theory – Newton's – is a special low-energy approximation to GR, and
GR makes predictions that cannot be duplicated using Newton's laws. Unfortunately, more accurate
and precise observations tend to overtake even the most beautiful and elegant of theories, which is
exactly the case today with general relativity. There are serious deviations from standard
gravitational theory being observed at cosmological scales. This has led to the inclusion of some
highly-questionable free parameters into Einstein's theory, the so-called “dark matter” and “dark
energy” free parameters; the latter being a resuscitated version of Einstein's cosmological constant,
which he considered the biggest mistake of his career. The revived cosmological constant with its
very tiny, yet non-zero value, has resurrected a quasi-religious “fine tuning” argument used to
support various untestable multiverse hypotheses (See Conjecture 42).
I'm convinced that whatever gravity is, it must be an emergent phenomenon based on entropy.
Several years ago, Erik Verlinde published papers presenting a very preliminary version of a theory
of entropic gravity and inertia. The physics establishment rewarded his efforts with derision and
ridicule. Since then, Verlinde courageously developed his entropic theory further into a full-blown
alternative version of general relativity. The new theory accomplishes all three things a “good”
theory should do. It has made some tentative predictions that are very exciting, while eliminating
the need for both “dark matter” and “dark energy” in order to explain several of the cosmological
anomalies that recent astronomical observations have revealed.
In a presentation entitled “Entropy of Spacetime and Gravity,” T. Padmanabhan compared space-
time to a physical solid. It's not that space-time actually is a solid, but the two share properties such
as pressure, shear force, and deformation. He wrote down a generic GR metric of a symmetrical
spherical volume: ds² = f(r) dt² – f(r)⁻¹ dr² – r² (dθ² + sin²θ dφ²). He then demonstrated that the
metric at the volume's horizon produces the equation for entropy: T dS = dE + P dV.
A horizon has both a temperature, T = ħc³ / (8π k_B G M), and an entropy, S = 4π k_B G M² / (ħc). In other
words, horizons have actual thermodynamic properties that interestingly combine four fundamental
parameters of nature: Newton's gravitational constant, G; Boltzmann's constant, k_B; the Planck
constant, ħ; and the speed of light, c. I believe Boltzmann's constant is the key to harmonizing, if
not unifying, gravity with the “forces of nature.” Since entropy and information are synonymous,
this ties a very nice ribbon around an entropically-modified version of Einstein's general theory of
relativity, harmonized with the quantum-mechanical concept of a holographic universe.
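Plugging numbers into those two horizon formulas is instructive. The sketch below evaluates them
for one solar mass (my example, not Padmanabhan's):

import math

hbar = 1.0546e-34   # reduced Planck constant, J·s
c    = 2.998e8      # speed of light, m/s
kB   = 1.381e-23    # Boltzmann's constant, J/K
G    = 6.674e-11    # gravitational constant, N·m²/kg²
M    = 1.989e30     # one solar mass, kg

T = hbar * c**3 / (8 * math.pi * kB * G * M)   # horizon temperature
S = 4 * math.pi * kB * G * M**2 / (hbar * c)   # horizon entropy

print(f"T = {T:.2e} K")        # ~6e-8 K, far colder than the CMB
print(f"S = {S:.2e} J/K")      # ~1.5e54 J/K, about 1e77 in units of kB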
My latest conjecture is: a) gravity is how the universe communicates with itself, communication as
the transfer of information-entropy, b) gravity emerges along with entropy, with horizons as the
repositories of information, c) the universe has an overall “objective” to maximize entropy at every
opportunity, d) space-time geodesics are essentially paths objects follow in order to maximize
entropy, and e) time equals entropy flux, which is always positive and is irreversible.
44. General Relativity Is a Weak-Field Approximation of a More Fundamental Law.
Expanding on Conjecture 43, I'm proposing that Einstein's field equations represent a weak-field
approximation of a more fundamental universal law based on entropy. If gravitation is recognized as
defining geodesics that maximize total entropy instead of as a simple “attractive force” between two
bodies, it might be possible for gravity to act repulsively under certain circumstances. Andrew
Thomas proposes such a possibility in his book “Hidden In Plain Sight 2: The Equation of the
Universe.” According to Thomas's hypothesis, our universe is the interior of a “black hole” with an
enormous radius – greater than 10²⁷ meters – far greater than the distance from Earth to the most
distant observable object. Whereas gravity operates as an “attractive force” on small scales,
Thomas thinks it acts as a “repulsive force” on cosmological scales, driving all matter in the
universe “outward” toward the universe's event horizon.
A similar idea is put forward in a paper Gravitational Repulsion within a Black-Hole using the
Stueckelberg Quantum Formalism, submitted by D. M. Ludwin and L. P. Horwitz. The authors use
quantum-mechanical arguments to show that “time” inside the event horizon of a black hole is
physically different than “time” outside the event horizon. Practitioners of orthodox black holology
preach that all objects passing through the event horizon gravitate toward a singularity lurking at the
center of the black hole. But Ludwin and Horwitz claim that any matter found inside would move
outward toward the event horizon, as in Thomas's universe.
I think what these two examples illustrate is that gravitation could behave in two diametrically
opposite ways. The question is: Why? I think the answer could be that gravitation is simply a
means to an end, that end being achieving maximal entropy. In a weak-field environment, gravity
always behaves as an “attractive force” as Einstein's equations predict, and in such an environment
a pair of free-falling objects always follow geodesics through space-time giving the appearance of
mutual attraction. Noting that entropy is inversely proportional to space-time curvature, it would
seem reasonable to speculate there is a cross-over point where the curvature becomes extreme.
Beyond the cross-over point, two converging geodesics could force local space-time curvature to
decrease the overall entropy of the universe instead of increasing it. In that case, the goal of achieving
maximal entropy might be better served by having the geodesics diverge from each other, giving the
appearance of “repulsive gravity” between the objects.
The preferred end state of the universe is absolute flatness because this represents maximal entropy.
Thus on cosmological scales, matter would tend to become as dilute and spread-out as possible,
causing geodesics of large-scale objects, like galaxies, to diverge. In other words, maximizing
entropy is accomplished through the free expansion of the universe, whereas this can be achieved
on smaller scales by having geodesics converge; thus, the same “gravitational force” can act in
opposite ways.
It's interesting that the Einstein-Infeld-Hoffman equations hint at this possibility. For the simple
case of a single stationary large mass, M, the acceleration of a small test mass toward the large mass
reduces to: –d²r/dt² = GM/r² – 4G²M²/(r³c²) + 5GM(v²/c²)/r², where v is the velocity of the
test mass. If v << c, the last term drops out, leaving two terms with opposite signs.
This allows the possibility of a test mass accelerating away from M when r < 4GM/c². In all
fairness, the E-I-H equations are categorically meant to be weak-field approximations, whereas the
condition r < 4GM/c² could only occur when strong gravity is present; nonetheless, I still find it
intriguing that negative gravity emerges from those equations when space-time curvature is large.
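To see the sign flip explicitly, the sketch below evaluates the two surviving E-I-H terms around
r = 4GM/c² for one solar mass. This is purely illustrative, since, as noted, the equations are being
pushed far outside their weak-field domain of validity:

# Sign of the slow-test-mass E-I-H acceleration,
# a(r) = GM/r² - 4G²M²/(r³c²), around the crossover radius r = 4GM/c².
G, c, M = 6.674e-11, 2.998e8, 1.989e30   # SI units; M = one solar mass

r_cross = 4 * G * M / c**2               # about 5.9 km for the Sun
for r in (0.5 * r_cross, r_cross, 2.0 * r_cross):
    a = G * M / r**2 - 4 * G**2 * M**2 / (r**3 * c**2)
    print(f"r = {r:8.1f} m  ->  a = {a:+.3e} m/s²")
# a < 0 inside r_cross: the test mass would accelerate away from M.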
You don't need to invoke weak-field equations to show repulsive gravity occurs when curvature is
extreme, however. In Chapter 6.7 of “Reflections on Relativity” by Kevin Brown, the standard
Schwarzschild metric is used to show that a test mass released near a black hole decelerates in a
certain region during its descent, as if gravity were pushing it outward instead of pulling it inward!
45. The Universe Is Spatially Flat but Temporally Curved.
Cosmologists say our universe is too flat for its own good, referring to it as “the flatness problem.”
When they make careful measurements, they are unable to detect any signs of curvature, and it's a
problem because, according to the standard cosmological model, the universe is currently poised on
the knife-edge between being closed and open, i.e., exactly flat. This would require an initial density
that was fine-tuned to a precision of one part in 10⁶², and they feel that's just too improbable to be true.
Luckily for the cosmologists, Alan Guth solved their “problem” by inventing inflation.
I have a different take on the whole matter. It is true that the universe displays no curvature in any
direction across space. The reason is quite simple: Any measurable spatial curvature would point
in a particular direction toward the center of curvature, thus establishing a spatial frame of
reference; however, we live in a radically relativistic universe with no preferred directions at all.
Extending a straight line in any direction always takes you straight back to “The Beginning.”
This does not mean there is no curvature, however. Because the universe expands, it's definitely
curved away from the “The Beginning” in the time direction. In other words, the universe must be
spatially flat, but temporally curved. Using electromagnetism collected by antennas, we can only
observe the universe as it existed after 377,000 years from “The Beginning” since space was opaque
to electromagnetism prior to that time. In any case, we could never observe “The Beginning”
itself using any physical means because it's receding from us at the speed of light. This would
red-shift any information from it into oblivion. But you can rest assured that “The Beginning” really is
still out there, about 13.8 billion light years away in any direction you choose to point.
At every moment in time t_U since “The Beginning” there is a curvature parameter 1/R², where
R can be interpreted as the physical radius of the universe. It turns out that dR/dt_U = c, the speed
of light; but although “The Beginning” appears to recede from us at the speed of light, the truth
is we're receding from it. Distant objects also recede from “The Beginning” at the speed of light in
their own reference frames, but their speeds of recession are reduced as observed from our reference
frame, making it seem they are receding from us, à la Hubble's law. For details of how all this
works, see Appendix W of my essay “Order, Chaos and the End of Reductionism.”
The bottom line is the universe is highly asymmetric with respect to time. Temporal curvature is
proportional to 1/t_U², and even simply observing distant objects through telescopes makes it
abundantly clear the early universe was vastly different from the universe in our immediate temporal
neighborhood. My “post-reductionist universal law” requires that all changes must maximize total
entropy/information, meaning the universe is always maximally entropic. Bekenstein's bound has
two equations defining maximal entropy/information, either in terms of R and total mass-energy, E_U,
or in terms of the area of curvature, A = 4πR². By setting those two equations equal to each other,
we obtain the relationship R = 2G E_U/c⁴. Substituting E_U = M_U c² gives R = 2G M_U/c², which means
R is the Schwarzschild radius of M_U. Points having the same radius of curvature lie on a “surface”
that could be described as a “sphere” with an area A, but not in the same sense as a physical sphere
in 3-dimensional space. Each “sphere” represents a separate moment in time, tU.
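As a sanity check (my numbers, not the essay's): if R is read as 13.8 billion light years, the relation
R = 2G M_U/c² implies an M_U of roughly 10⁵³ kg, which is at least in the right ballpark of common
estimates for the mass of the observable universe.

# Mass implied by R = 2·G·M_U/c², taking R = 13.8 billion light years
# (an assumption for illustration only; the essay treats R as a temporal
# radius of curvature rather than a literal distance).
G, c = 6.674e-11, 2.998e8
ly  = 9.461e15                 # metres per light year
R   = 13.8e9 * ly              # ~1.3e26 m
M_U = R * c**2 / (2 * G)
print(f"M_U = {M_U:.2e} kg")   # ~9e52 kg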
Paul Dirac proposed the large numbers hypothesis (LNH) in the late 1930s. Most scientists (Arthur
Eddington being one notable exception) either ignored him or dismissed his work as numerology.
At any rate, Dirac concluded from the LNH that E_U is proportional to t_U², and the gravitational
“constant” is proportional to 1/t_U, in which case R = 2G E_U/c⁴ ∝ t_U makes sense and connects
cosmology with Bekenstein's theory about entropy/information.
The law of conservation of mass-energy is one of the most cherished principles in all of science.
The relationship E_U ∝ t_U² throws that law completely out the window; but bear in mind that E_U
would change very little over intervals that are brief compared to cosmological time, such as the
time from the dawn of the human species until now, so this cherished law has held up pretty well
over the past few centuries at least. We really shouldn't be too surprised if energy isn't conserved
because Emmy Noether's theorem proves this law depends entirely on continuous time translational
symmetry (time being flat), which we now know isn't the case.
46. Time Is the Indispensable Element that Ties Physical Elements Together.
Appendix Y of my essay Order Chaos and the End of Reductionism introduced a model of the
universe centering on the reality of time and its essential nature in tying together the physical
elements of the universe. The model proposed the following relationships.
G ∝ T ∝ t_U⁻¹     R ∝ t_U     A ∝ E_U ∝ t_U²     I_U ∝ t_U³
G is Newton's gravitational parameter, T is absolute temperature, R is the radius of temporal
curvature (commonly considered to be the radius of an expanding universe), A is the area of the
“surface” connecting every point at a given moment in time, t_U, E_U is the total mass-energy, and I_U
is total information. The model is derived from the post-reductionist universal law, which states
that every change maximizes the total degrees of freedom (along with entropy-information) of the
universe. This model is a radical departure from the standard cosmological model presented in
popular science literature in that the gravitational “constant” is no longer considered a constant, but
diminishes over cosmological time. The Planck area, which is a fundamental constant according to
orthodox reductionism, must be proportional to t_U⁻¹ because it's proportional to G.
Unlike the current theory of holographic universe, the total information encoded on the “hologram”
is proportional to the volume of the universe, increasing at the rate t_U³, instead of to its area. This is because
this volume includes the entire history of the universe, all of which must be encoded in the present
moment if it is to be accessible. But the present moment is represented as an expanding surface
with an area proportional to t_U²; therefore, the information density (the number of bits per unit area)
must increase as well. Since four units of Planck area are needed to encode one bit of information,
the Planck area must shrink relative to a unit of area based on square length.
There is an equivalence between mass-energy and entropy-information – “it” equals “bit” – and this
equivalence depends on the fact that the cosmological temperature is proportional to t_U⁻¹. This
temperature is currently 2.73 K, as revealed in the temperature of the so-called cosmic microwave
background, which is actually the combined microwave radiation emitted from the background,
foreground and in-between ground, which are all at the same temperature after taking into account
the cosmological red shifts. The reason for the temperature dependence is that the mass-energy
released by the erasure of one bit of information is proportional to temperature, and so that
temperature must be proportional to t_U⁻¹ if mass-energy is proportional to t_U² and information is
proportional to t_U³.
Note that the elements G, T, R, A, EU, and IU are not necessarily numerically proportional to each
other, but they are connected together through their relationships with time.
47. Time Is the Primary Organizing Principle; Space Is Derived from It.
Out of habit, humans organize the universe spatially. We assume that there are places other than
“here” that share the moment of “now.” Well, try this thought experiment: Imagine a distant place
that shares the present moment with you. Now try to point in the direction where that place can be
found. You can't. No matter which direction you point, you are pointing toward the past and not
toward any place in the present moment. Everything that has ever happened in the past has been
encoded into the present moment. Looking out into “space” from our present “location,” we see
history organized into a particular time line. From alternate locations in “space,” we can see history
organized into alternate time lines, but space itself can be neither measured nor observed. We
cannot find our location in space or tell how fast we're moving through it, if at all. If no forces are
acting on an observer to cause acceleration, no spatial curvature can be observed. Space is flat
without any preferred direction. Because space is absolutely featureless and symmetrical, it is
unobservable; therefore, space is not real and must be thought of as a mathematical abstraction. On
the other hand, time is asymmetric and curved, and clocks can measure its decreasing curvature;
time is observable and therefore it is real.

48. Mind Interacts with Matter and Energy.


Conjecture 16 above states that consciousness (mind) emerges from material complexity. I'm
forced to rethink that position in light of a recent experiment conducted by Shoichi Toyabe of Chuo
University. In brief, the experiment converted pure information into energy, in perfect agreement
with the rate predicted by Leó Szilárd in the equation E = k_B T ln 2 per bit. Based on this result, I'm
now inclined to think that mind is independent of matter and could be the primary instead of the
secondary component of that duality. The experiment is described in detail in the 2010 paper by
Toyabe et al.
Physicists seem to be divided into two camps. The materialists or realists believe that space, time,
matter and energy are real and that consciousness or mind are illusory. The idealists take the
opposite view that consciousness is primary and the entire material universe is derived from it.
Most theoretical physicists, especially those engaged in particle physics and string theory, fall into
the first camp. The latter view is shared by notable physicists of the past, such as Erwin
Schrödinger and John Wheeler. My own thoughts on this matter were influenced by the following
thought experiment.
Today is Friday, October 6, 2017. There is no doubt or uncertainty in my mind concerning which
day of the week it is. Because of my certainty, if someone were to mention that today is Friday,
there would be absolutely no information associated with that statement. This comes directly from
Claude Shannon's definition of information: S = –Σ p_k log₂ p_k, where the p_k are the probabilities of
specific outcomes. In the case of a seven-day week, p_k = 0 for every day except Friday, which has a
probability equal to 1. Thus, S = –6 × (0 × log₂ 0) – 1 × log₂ 1 = 0, taking 0 × log₂ 0 = 0 by
convention. So zero uncertainty
amounts to zero information. Now suppose I had sustained a head injury that rendered me
unconscious for several days. Upon regaining consciousness, I have no recollection of when the
head injury occurred or any idea of what day of the week it is. My total uncertainty regarding the
day of the week makes all seven p_k values equal to 1/7, and this level of uncertainty is associated
with an amount of information equal to S = –log₂(1/7) = 2.8 bits. Now if someone informs me
today is Friday, all my uncertainty would vanish, along with the information associated with it.
Essentially, 2.8 bits of information from not knowing what day it is would have been erased from
my consciousness.
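The day-of-the-week arithmetic generalizes directly; here is Shannon's formula as a short function
(my own illustration):

import math

def shannon_bits(probs):
    # S = -Σ p_k · log2(p_k), with the 0·log 0 terms dropped by convention.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_bits([1.0] + [0.0] * 6))   # 0.0 bits: certainty that it's Friday
print(shannon_bits([1.0 / 7] * 7))       # ~2.807 bits: total uncertainty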
At this stage, it’s important to note there is a subtle difference between entropy and information,
although I tend to use the terms interchangeably. Shannon's definition of information is identical to
Boltzmann's thermodynamic entropy when all p_k are equal because, with p_k = 1/Ω,
–log p_k = log(1/p_k) = log Ω. In this condition, information is maximized. Thus, entropy is the
maximal amount of information
that is possible, which reflects the maximal amount of uncertainty. According to the equipartition
theorem, the total energy of a physical system in equilibrium will be equally divided among the
various degrees of freedom. The underlying reason for this is because evenly dividing the energy
also evenly divides the probabilities of all possible states, thus maximizing the amount of
uncertainty concerning the actual state of the system. Nature seems to crave uncertainty and will
try to increase it at every turn.
Now most people, including scientists unfortunately, confuse information with data. When
somebody says, “I’m emailing you the information you wanted,” they are really sending data and
not information. Information measures the uncertainty on the receiving end of communication.
Data is what removes the uncertainty. Both data and information are measured in bits, and N bits of
data remove N bits of uncertainty.
A person who doesn't know the truth that information = entropy would assume that pinning down
what day of the week it is should have increased the amount of information; however, in reality the
opposite is true. Information and uncertainty go hand-in-hand. Because 2.8 bits of information
have been literally erased from my consciousness, there appears to be a serious violation of the
second law of thermodynamics that states entropy (information) cannot be destroyed. So in order
for information to be erased from my consciousness, either information/entropy must increase
somewhere else in the same system, or information must be converted to an equivalent amount of
energy, as demonstrated by the Toyabe experiment. The most likely way for information → energy
conversion to take place is to release 2.8 bits worth of information in the form of heat inside the
brain. In other words, clearing up my confused state of mind literally releases a tiny amount of heat
energy into the environment! According to the Szilárd equation for T = 98.6 °F = 310 K, the energy
released would be around 8.3 × 10⁻²¹ joule. As heat, this energy increases the entropy of my brain
tissue and the surrounding environment, so ultimately the entropy of the entire universe increases to
make up for the information erased by clearing up my mind.
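Again the arithmetic is easy to verify. The script below evaluates Szilárd's E = k_B T ln 2 per bit
for 2.8 bits at body temperature:

import math

kB, T, bits = 1.381e-23, 310.0, 2.8      # J/K, kelvins, bits erased
E = bits * kB * T * math.log(2)
print(f"E = {E:.2e} J")                  # ~8.3e-21 joule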
Now a materialist/realist would argue that the entire thinking process described above is nothing
more than changes in the electrochemical state of the brain, which naturally will produce heat. I
don't deny for a moment that my becoming aware of the day of the week would produce
electrochemical changes in the brain. What I'm saying is that my initial uncertainty about the day of
the week wasn't a physical “thing” at all – it was purely mental. And yet removing this mental, non-
physical uncertainty somehow produces measurable physical effects, as the Toyabe experiment
proves. This makes me wonder if the entire physical universe could be the manifestation of a
mental process, as the idealists claim is true.
This also partially clears up the mystery of wave function collapse. When an electron is isolated
from the universe, it becomes a wave function, an immaterial, probabilistic entity, creating
uncertainty and information. An electron has been converted into information. When the wave
function “collapses,” uncertainty is reduced; information is erased and is converted into an electron.

49. Physics, Cosmology and Phenomenology Are at Odds with Each Other.

Physics, cosmology and phenomenology (a philosophical movement that describes the formal
structure of the objects of awareness and of awareness itself) are currently in a state of disharmony.
In the field of physics, attempts at unifying quantum mechanics and general relativity have led only
to dead ends. (The fundamental fact that gravity is not a true force, as proved by Albert Einstein,
means its elusive force carrier, the graviton, does not exist. There is no way to quantize gravity as a
force if it isn't a force to begin with.)

Physics in its current state does not support what we humans observe of the cosmos (the home we
live in) without the introduction of additional “stuff” such as dark matter and dark energy, which are
at odds with particle physics as it is understood. Neither physics nor cosmology – either separately
or in combination with each other – can support a coherent theory of phenomenology. Thus, those
who embrace material reductionism tend to either trivialize consciousness as a byproduct of brain
waves or dismiss consciousness altogether as a delusion (which evades the question of who or what
is experiencing this delusion).

If physics, cosmology and phenomenology all purport to be compatible with the same reality, they
should ideally be in harmony with each other, but this is clearly not the case. I believe the origin of
this disharmony is our lack of understanding of reality. We labor under the illusion that the universe
is a multidimensional bulk object, described as a four-dimensional space-time continuum in “tech
speak.” I've discovered that the “universe” is fundamentally an asymmetrical temporal dimension
having an event horizon expanding at the speed of light with respect to a symmetrical spatial
dimension that possesses three degrees of rotational freedom, which is misinterpreted as three
dimensions. The only thing that is “real” is the Here and Now on the expanding temporal event
horizon. All that can be experienced through the physical senses must be experienced in the Here
and Now. Each conscious being occupies a separate “spot” on the expanding event horizon that
observes reality from a slightly different perspective. Objects that appear to occupy volumes of
space in the distance are residues left behind by the expanding event horizon in the past. The
following model helps illustrate this.

Imagine being a member of a crowd of people stretched out in a straight line who are all walking
backward in the snow, and imagine your field of vision is limited to a pair of thin slits oriented 45°
to your left and 45° to your right. All you can see through your limited field of vision are trails of
footprints in the snow that were left by other people in the moving line. This is precisely the kind
of limited view of the universe we have through our sense of sight. We misinterpret the footprints
as dynamic objects that exist in their own right instead of static images left behind by a dynamic
present. This leads to a completely false cosmology that is disconnected from physics and reality.

Another common error is considering ourselves as spectators looking at the universe as exterior
observers instead of conscious beings who are completely integrated with the universe. We must
get over this habit of objectifying the universe if we are to harmonize phenomenology with physics
and cosmology.

50. The Problem of Mind-Body Dualism Is Solved.

Mind-body dualism is a famous problem in philosophy. René Descartes, among others, defended
the argument that mind and body are made up of two entirely different “substances.”
The problem arises when trying to explain the mechanism that allows an immaterial “substance”
(the mind) to control the action of a material body. That question tends to drive philosophers into
two opposing camps: The materialist camp argues that every observable phenomenon is entirely
physical; i.e., there is no such thing as consciousness. The idealism camp argues the opposite; i.e.,
the physical universe we experience as matter and energy is created by consciousness itself. These
positions “solve” the dualism problem by arguing that one or the other part of the dualism doesn't
even exist. In my view, both materialism and idealism are untenable positions, and I'll explain why.

According to materialism, the brain is a very complex and sophisticated machine that carries out all
of its functions via electrical signals. There simply is no way for any external influence (other than
electrical inputs) to affect what the brain does, and having the experience of being conscious is
therefore just an illusion. The problem with this argument is that it cannot explain how a non-
conscious brain can be “fooled” into believing it's conscious. Can a mere machine, regardless of its
level of complexity, really experience anything? Or is the experience of consciousness a real and
emergent property of complexity? It was Descartes, after all, who said, “Cogito ergo sum.”
Consciousness is the only thing that any of us can be absolutely certain is real.

On the flip side, idealists argue that matter and energy are manifestations of consciousness, as in a
dream, and what passes for objective reality is a product of a Cosmic Thought of a Universal Mind.
This is the “it from bit” conjecture taken to an extreme level, but the problem with taking it to an
extreme level is that it ultimately leads to solipsism. Although solipsism isn't necessarily false, it
raises a dilemma: If everything and everyone I experience are just manifestations of my own mind,
exactly whom am I discussing mind-body dualism with and why should I even bother discussing it?
Solipsism quickly turns into a one-way trip to nihilism.

The solution to the mind-body dualism was provided in Conjecture 48, above. 1) If we affirm
consciousness (mind) as being comprised of pure information, and 2) since there are sufficient
reasons to accept that information can be converted into energy (and vice versa), then Descartes's
basic premise is false: Mind, matter and energy are not completely separate things, but are actually
different manifestations of one entity underlying all three. Remember that there was a time in the
not-too-distant past when science refused to accept matter and energy as being interchangeable;
instead, these were considered completely different entities governed by separate conservation laws.
Soon after Albert Einstein's special theory of relativity was published in 1905, E = mc² became
accepted as a scientific fact while information (which remained ill-defined until Claude Shannon's
work in the 1940s) stood alone as a separate, mysterious entity.

Matter ↔ Energy          Information

Leó Szilárd proposed an equivalence between energy and information in 1929, but this was not
experimentally confirmed until 2010 by Toyabe et al. Unfortunately, those ground-breaking
experiments have been largely ignored by the scientific community and I believe that's because the
results seem to violate the law of conservation of energy. But this isn't necessarily the case. Even
when information/entropy increases over time, all we have to do with the existing conservation law
is to replace {mass + energy} with {mass + energy + information}. Any increase in entropy of a
closed system would then be accompanied by a corresponding decrease in mass + energy, which
would be undetectable in most cases since the quantity (k_B T ln 2) is so small. Therefore, I claim
Szilárd's equation must be taken literally, just as E = mc² is taken literally, and I am confident the
following relationship will be eventually proven as a scientific fact and become the new paradigm.

Matter ↔ Energy ↔ Information

The above relationship provides the missing link between mind and matter and neatly resolves the
problem of mind-body dualism that has confounded philosophers and scientists alike for centuries.
By the way, you may not agree with what I just said, but I still want you to remember that I said
it. ;-)

51. The “Measurement Problem” Is Solved by Abandoning the Linearity Assumption.

One of the persistent problems plaguing quantum mechanics is the so-called Measurement Problem.
Although I don’t ordinarily use Wikipedia as a reference because I find information in it somewhat
unreliable, I will use the following quote from it anyway.

“The measurement problem in quantum mechanics is the problem of how (or whether) wave function collapse
occurs. The inability to observe this process directly has given rise to different interpretations of quantum
mechanics, and poses a key set of questions that each interpretation must answer. The wave function in
quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition
of different states, but actual measurements always find the physical system in a definite state.” [emphasis
mine]

I believe the problem is entirely due to the assumption I highlighted in bold type above, i.e., that the
Schrödinger equation is a linear superposition of different states. This assumption works very
well for small, low-energy systems like the double-slit experiment superposing two wave functions.
But what happens when 10²³ wave functions are superposed at room temperature? This is precisely
the situation that arises when making quantum measurements using laboratory instruments.

Sitting here looking at my glass coffee mug, it struck me that the Measurement Problem is related to
the question of the persistence of macroscopic objects. If my coffee mug is a linear superposition of
quantum wave functions, why should these wave functions keep collapsing into a coffee-mug shape
instead of transmogrifying into an ashtray or a wine glass? I think the answer is simple: The coffee
mug is not a linear superposition, but rather a non-linear superposition. A non-linear form of the
Schrödinger equation must be applied to systems that attain a certain size or energy level, creating a
feedback loop which causes chaotic behavior. Instead of spreading out uniformly through state
space, chaotic systems are characterized by states that clump together around regions called strange
attractors. The states will continue to surround the strange attractor until the system is sufficiently
disturbed, which is why my coffee mug will remain mug-like for the foreseeable future.

When measuring a quantum state in a laboratory, such as an up/down spin of a single electron, it is
wrong to think of the measuring apparatus as a simple two-state object like the electron. The
apparatus is a classical object with a countless number of quantum states existing in a non-linear
superposition. When a measurement of an electron is made, the electron’s wave function is merged
with the myriad superposed wave functions of the apparatus, with the electron’s (superposed)
magnetic moments greatly amplified. This causes the myriad states of the apparatus to bifurcate
into two separate sets that are classically incompatible and mutually exclusive. The first set of
states surrounds a positive/spin-up attractor and the second set surrounds a negative/spin-down
attractor. An apparent “choice” is made as to which set prevails. Whether the choice is made by the
electron, the apparatus, or both is immaterial, because the result is completely random and non-
computable. (We know this is true because the statistics violate Bell’s Inequality.)
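
This is only a classical toy model, certainly not a quantum simulation, but a few lines of Python
illustrate the flavor of the idea. An ensemble of “apparatus” states evolving under the non-linear
feedback rule dx/dt = x – x³ clumps around two attractors at x = ±1; a tiny bias (standing in for
the electron’s amplified magnetic moment) tilts which attractor tends to prevail, while any single
run remains random. The rule, the bias, and the noise level are all my own illustrative choices:

import math
import random

random.seed(1)

def measure(bias, n_states=10000, steps=200, dt=0.05, noise=0.05):
    """Toy non-linear 'apparatus': each state relaxes toward one of the
    attractors at x = +1 or x = -1 under dx/dt = x - x**3, nudged by a
    tiny bias representing the measured electron and by random noise."""
    states = [random.gauss(0.0, 0.01) for _ in range(n_states)]
    for _ in range(steps):
        states = [x + dt * (x - x**3 + bias)
                    + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
                  for x in states]
    return sum(1 for x in states if x > 0) / n_states   # fraction "spin-up"

print("fraction ending spin-up, bias +0.02:", measure(+0.02))
print("fraction ending spin-up, bias -0.02:", measure(-0.02))
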
Applying linear superposition to the measurement process doesn’t work because it requires an
arbitrary, unexplained “collapse” of the electron’s wave function. Some very creative alternatives
have been proposed in order to explain the “collapse” or get around it altogether. These include
John Wheeler’s Participatory Universe, where human consciousness causes the electron’s wave
function to “collapse” merely by looking at the measurement, and Hugh Everett’s Many Worlds
Interpretation, which claims that the universe splits into two real and coequal parts whenever a
measurement of a quantum property is made.

For me, the simplest and most obvious way to solve the problem is by getting rid of the false
assumption of linear superposition. Of course, that would mean that the equations that underlie
physics are non-linear and thus inherently “unsolvable,” which terrifies scientists and engineers
who must be able to solve equations in order to make predictions.

52. Mind → Consciousness → Uncertainty → Information → Energy (Mass) → Space-Time.

When I began documenting my journey of discovery in this journal several years ago, I had no idea
where it would lead, but it seems everything I’ve learned so far can be summarized in the
relationship shown above. The Szilárd equation, E = kB T ln 2 per bit, was experimentally
verified by Toyabe et al. in 2010, so there is absolutely no doubt that energy is equivalent to and
derived from information. Claude Shannon’s groundbreaking work in the field of information
theory defines information in terms of probabilities: S = Σ pₖ log₂(1/pₖ) for k = 1, 2, …, N
possible outcomes. The expression for S is maximized when all of the probabilities are equal, and
then S = log₂(N), which is the same as thermodynamic entropy according to Boltzmann’s
definition.
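
A short Python check of both claims (the eight-outcome example is mine, chosen only for
illustration):

import math

def shannon_bits(probs):
    """S = sum of p_k * log2(1/p_k), skipping outcomes with p_k = 0."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

N = 8
uniform = [1.0 / N] * N              # maximum uncertainty
certain = [1.0] + [0.0] * (N - 1)    # deterministic outcome

print(shannon_bits(uniform))   # 3.0 bits = log2(8), the maximum
print(shannon_bits(certain))   # 0.0 bits: no uncertainty, no information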

Probability is based upon uncertainty. It should be remembered that occurrences determine
probabilities, and not the other way around. Probabilities have no meaning in deterministic systems
where the outcomes are computable; in those cases, the probability of one outcome is always unity
while the probabilities of all other outcomes are zero, so the amount of information, S, associated
with those systems is zero. Furthermore, according to the Lucas-Penrose argument about Gödel’s
Incompleteness Theorem, a deterministic algorithm cannot produce consciousness; therefore, it
seems that consciousness and uncertainty are codependent.

Space-time separates events, resulting in the law of causality that limits the speed of communication
to the speed of light. Communication is the transfer of information, but information can only exist
in the presence of uncertainty. Take, for example, the famous EPR paradox, where two parts of a
“system” in a state of quantum entanglement appear to instantaneously communicate with each
other across a distance. This communication is only apparent, however, because both parts of the
entangled system have complete “knowledge” about each other with a total lack of uncertainty, and
no communication can occur between them because there is no information to communicate. As far
as an entangled system is concerned, there is no need to apply the law of causality to its parts, and
no space-time separations can apply to them.

In summary, both energy-matter and space-time are dependent on information and uncertainty,
which is a state of consciousness within the mind. The words of the renowned physicist James
Hopwood Jeans sum it up nicely. Here’s a quote from The Mysterious Universe (1937):

“Today there is a wide measure of agreement, which on the physical side of science approaches
almost to unanimity, that the stream of knowledge is heading towards a non-mechanical reality; the
universe begins to look more like a great thought than like a great machine. Mind no longer appears
as an accidental intruder into the realm of matter; we are beginning to suspect that we ought rather to
hail it as a creator and governor of the realm of matter.”

53. Did the Universe Really Have an Actual Beginning or Is It Eternal?

This seems to be a ridiculous question because almost everyone agrees that the universe “began to
exist” at some time in the past. Most scientists believe this occurred around 13.8 billion years ago
while some religious fundamentalists insist it began exactly 6,022 years ago (as of 2018 CE),
according to Archbishop James Ussher’s biblical chronology. Physicists say the universe started out
as a “big bang” event following a very brief period of inflation when it expanded from a proton-
sized object to about the size of a grapefruit. Conventional scientific wisdom says the total amount
of mass-energy in the universe is constant throughout time, making the universe’s initial state
incredibly hot and dense. In fact, some scientists think it started as a singularity of infinite density,
even though Nature seems to abhor singularities even more than She abhors a vacuum.

In my essays Order, Chaos and the End of Reductionism and The Universe on a Tee Shirt, I propose
a model of the universe based on a curved, expanding temporal dimension called “time.” In order
to measure curvature, you need something “flat” in comparison, and a flat spatial dimension serves
that purpose. (All free-falling observers are surrounded by space that appears absolutely flat to
them. Space appears curved only to observers who are being accelerated and prevented from falling
freely.) The curvature of time is measured by a radius, originating from a temporal center of
curvature that could be interpreted as the Beginning. My proposed model disregards the law of
conservation of mass-energy on cosmological scales, allowing the total mass-energy of the universe
on the Now surface to increase over cosmological time. This avoids a density singularity at the
origin, but the curvature still becomes infinite as the radius of curvature approaches zero. There’s
still a problem with this because Nature considers any infinity – even if it’s only a curvature – as
anathema and a curse.

Then the thought occurred to me that the so-called Beginning at the point of origin may only be a
mirage. According to my model, every observer sees the Beginning recede into the distance (and
into the past) at the speed of light, c. In fact, the speed of recession is what defines c in this model.
But I’ve always maintained the universe is radically relativistic, meaning there are no external
standard units of length, mass, or time which can be used to measure things.

The meter and second of the MKS system of units are completely arbitrary, originally based on the
circumference of the Earth (1 m = one ten-millionth of the distance from the north pole to the
equator) and its rate of rotation (1 sec = 1 / 86,400 of a solar day), and the kilogram was defined as
the mass of an arbitrary volume of pure water (0.001 m³) at the temperature of maximum density
(4° C). But the only way any quantity can be measured in absolute terms is in relation to itself. In
other words, the only possible measurements are dimensionless ratios, such as a change of a
parameter with respect to the parameter itself, e.g. ΔX / X. This suggests that time should be
measured as the change in curvature with respect to curvature instead of using a radius of curvature
measured in arbitrary units like kilometers or light-years. The function that satisfies this
requirement is a logarithm, since the integral ∫ dX / X = ln X. This suggests that time may not
be linear with a Beginning at a fixed point of origin; instead, it is defined using logarithms with
values between –∞ and +∞.
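
A few lines of Python make this concrete; the sample radii below are arbitrary placeholders,
which is exactly the point, since only their ratios matter:

import math

# Equal *ratios* of the radius of curvature correspond to equal steps
# of logarithmic time, regardless of the units the radii are quoted in.
radii = [1e-30, 1e-20, 1e-10, 1.0, 1e10]             # arbitrary sample values
epochs = [math.log(r) for r in radii]
print([b - a for a, b in zip(epochs, epochs[1:])])   # identical steps (~23.03)

# ln(X) -> -infinity as X -> 0, so a logarithmic clock never reaches
# the "Beginning" at X = 0; it only recedes without limit.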

When a function is plotted on a logarithmic scale, there is no zero point on the X scale, only ever-
smaller orders of magnitude, 10⁰, 10⁻¹, 10⁻², 10⁻³, etc. If cosmological variables were scaled on a
logarithmic time scale, the so-called Beginning would always appear in the remote past as receding
at the speed of light to every past, present, and future observer, making it utterly unapproachable
and unreal. The speed of light is unapproachable in the same way: No matter how “fast” you are
traveling, you can never approach a receding light wave because it will always recede at the same
speed, c. In other words, the universe could be ever-changing, yet eternal.

Paul Dirac noted that certain dimensionless ratios, such as the radius of the universe divided by the
radius of a proton, seem to be clustered around specific numbers, leading to his Large Number
Hypothesis. According to LNH, Newton’s gravitational constant decreases and the total mass-
energy of the universe increases over time when those quantities are expressed in arbitrary MKS
units of measurement, whereas they would be constant pure numbers when expressed as
dimensionless ratios. Lee Smolin also proposed that universal “laws” change over epochs of time,
although he didn’t quite go as far as to claim time is logarithmic. Laurent Nottale and others
proposed something called “scale relativity” as a feature of a universe based on logarithms without
any fixed dimensions based on “universal constants,” only dimensionless ratios with universal time
proceeding along as logarithmic “epochs.”

I think a universe without a Beginning should be taken seriously, although I must admit I’m unable
to prove it’s true or develop this possibility into a working theory.

54. Small Numbers of Information Bits Represent Incredibly Large Uncertainties.

I saved this essay on a 32 gigabyte flash drive, which stores 274,877,906,944 bits of data. As I
mentioned previously, people often miss the distinction between data (certainty) and information
(uncertainty). In terms of information, there are 2^274,877,906,944, or on the order of 10^82,000,000,000,
different ways the flash drive can be configured. In comparison, it is estimated there are “only”
10⁸⁶ protons in the known universe. Imagine trying to randomly guess the exact configuration of
the flash drive among all those possibilities, and that’s just the information for one flash drive!
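
The arithmetic is easy to verify in Python (I use the binary convention, 1 gigabyte = 2³⁰ bytes,
which is what makes the bit count come out to exactly 274,877,906,944):

import math

bits = 32 * 2**30 * 8           # 32 gigabytes expressed in bits
print(bits)                     # 274877906944

# The number of configurations is 2**bits; its size is easier to grasp
# as a power of ten: 2**bits = 10**(bits * log10(2)).
exponent = bits * math.log10(2)
print(exponent)                 # ~8.27e10, i.e. on the order of 10^82,000,000,000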

Well, that completes my list of conjectures for now. I might include others as time goes on.