
The Hidden Secrets of General Relativity Revealed

(According to an Amateur Scientist)

By John J. Winders, Jr.

The cover photograph of Albert Einstein was provided by “sheeze.”
Note to my readers:
You can access and download this essay and my other essays through the Amateur
Scientist Essays website under Direct Downloads at the following URL:

You are free to download and share all of my essays without any restrictions, although it
would be very nice to credit my work when quoting directly from them.
Electrical engineering was my life’s profession, but my favorite field of study has always been physics. When I retired in
2007, I made an effort to learn as much as I possibly could about general relativity and quantum mechanics in the limited
amount of time I had left. What I discovered to my surprise is that almost everything I thought I knew about general
relativity was wrong. It is now 2019. This rather short essay summarizes my findings in capsule form, and by the end
you’ll either be astounded by it or think it’s all bunk. So be it. I’m not sure even the great Albert Einstein, whose portrait
graces the cover of this essay, could have imagined all of the hidden implications of his general theory of relativity. Most of
these implications have only come to light because of fairly recent advances in quantum theory and information theory, and
it seems the majority of conventional mainstream scientists haven’t caught up with them yet.
This essay is aimed at a general non-technical audience, so I tried to avoid as much math as possible while still getting the
ideas across. There are many more details that support my claims in my other essays. You can find them on my web site by
following the link on the preceding page. The list below reveals 16 key findings.
1. Gravity is not a force. Gravitation defines geodesic paths through space-time in the absence of forces.
Many physics reference books state there are four forces of nature: Electromagnetic, strong nuclear, weak nuclear, and
gravity. This is wrong. Gravity is not a “force.” Albert Einstein realized a man falling down an elevator shaft feels
nothing – no force whatsoever – and if the man holds an accelerometer on the way down, it will record zero. The fact that a
falling man feels no force at all was a deep mystery to Einstein, which prompted him to embark on a quest to unlock the
secrets of gravitation. The result of his decade-long quest was the general theory of relativity.
According to GR, when a body is allowed to freely fall near a massive object such as the Earth, it follows a path called a
geodesic through space-time, which is the longest possible path between two points. According to Euclidean geometry, the
shortest distance between two points is a straight line. But according to the peculiar geometry of space-time, a straight line
is the longest distance between two points when there is no gravitation. An external force acting on an object causes it to
accelerate, making it take a “shortcut” through space-time. The only external force that has any effect on macroscopic
scales is the electromagnetic force – the nuclear forces are only felt on scales the size of an atomic nucleus.
In short, all objects travel through space-time, and gravity defines the geodesic paths of objects that are allowed to fall
freely. An external force (which is exclusively electromagnetic) makes an object deviate from its geodesic path, thereby
shortening the distance traveled. But why should objects be “traveling” through space-time in the first place? When I asked
the experts on an on-line Physics Forum, they told me it was a ridiculous question. (I guess I shouldn’t have expected to get
the right answer to an existential question from a physics forum. The right answer is given in Finding 6, below.)
2. Einstein’s field equations seem to describe space-time as an elastic solid with thermodynamic properties.
This finding truly astonished me. Einstein’s field equations of general relativity describe a 4-dimensional field (space-time)
with attributes similar to an elastic solid. Under the influence of gravity, this material is subject to distortion – stretching,
shearing, and twisting – and in the absence of gravity, it snaps back to a perfectly “flat” condition. The Schwarzschild
metric is a solution of Einstein’s field equations, and theoretical physicist Thanu Padmanabhan revealed that by rearranging
the terms of this metric at the horizon, the equation for entropy emerges: T dS = P dV – dE. In other words, space-time has
characteristics similar to an elastic solid material including thermodynamic properties such as pressure, internal energy,
temperature and entropy! You won’t find space-time described like this in very many physics textbooks.
3. The space-time field incorporates information/entropy, which measures uncertainty.
The fact that space-time has entropy intrigued the daylights out of me. My information-theory background taught me that
entropy is really a special case of information. According to Claude Shannon (the Godfather of IT), information is related
to systems having discrete states. If N is the number of states and pk is the probability of the kth state, then according to
Shannon’s definition, information, S, is computed using the following formula.
S = – Σ pk log (pk),  summed over k = 1, 2, 3, … , N
It is customary to use the base-2 logarithm, and so S is expressed as “bits.” But S can also be expressed as “nats” or “decs”
by using natural or base-10 logarithms instead.
The amount of information related to a system of N states is maximized when all probabilities are equal; i.e., pk = 1/N for all
values of k. So when information is maximized with pk = 1/N,
S = – Σ pk log (1/N) = Σ pk log (N) = log (N) Σ pk = log (N)
Compare the above expression to how Boltzmann defined thermodynamic entropy: S = kB loge (N). The only difference
between the two definitions is that Boltzmann’s formula uses the natural logarithm multiplied by the constant kB, so S is
expressed in thermodynamic units of energy and temperature instead of bits as in Shannon’s formula. Entropy is simply
maximal information, which is attained when all states have the same probability and produce maximal uncertainty. Thus it
seems that Einstein’s field equations are describing a physical system having discrete states with an inherent uncertainty.
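Shannon’s formula, and the claim that a uniform distribution maximizes it, can be checked numerically. The sketch below is my own illustration in Python; the particular probability values are made up for the example.

```python
import math

def shannon_information(probs, base=2):
    """Shannon's formula S = -sum(pk * log(pk)); base 2 expresses S in bits."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Uniform distribution over N = 8 states: S = log2(8) = 3 bits, the maximum.
uniform = [1/8] * 8
print(shannon_information(uniform))   # ~3 bits

# Any non-uniform distribution over the same 8 states carries less information.
skewed = [0.5, 0.2, 0.1, 0.05, 0.05, 0.05, 0.03, 0.02]
print(shannon_information(skewed))    # less than 3 bits
```

Passing `base=math.e` instead of 2 would express the same result in nats, matching Boltzmann’s natural-logarithm convention.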
4. There is no “conflict” between the theory of gravitation and quantum mechanics.
Uncertainty forms the very foundation of quantum mechanics. One experiment after another has proven that local objective
reality simply does not exist at the quantum level. Nothing exists in a definite state until it is observed. The quantum level
consists of information and uncertainty. This brings up yet another falsehood you’ll see stated repeatedly in the scientific
literature, namely that quantum mechanics and gravity are “incompatible.” How can two things be “incompatible” when
both of them emerge from the very same principles (uncertainty and information)? The Bekenstein-Hawking formula
brings quantum mechanics and gravity together in a simple and elegant way as described in Finding 6. It is odd that
physicists keep searching without success for evidence of an elusive quantum particle known as the “graviton.” The
hypothetical graviton is a massless spin-2 particle that is supposed to mediate the force of gravity. However, why would
there be a particle to mediate gravity when gravity isn’t really a force?
5. True “black holes” cannot be contained within the universe.
The above statement is the closest thing to heresy you can say in the physics community. Black holes are a central part of
the life’s work of many a physicist, resulting in fame, fortune, and Nobel Prizes for some of them. So be it. I stand by my
statement with an unshakable conviction, and I’m encouraged that a growing number of physicists are gradually coming to
the same conclusion. Even Stephen Hawking himself began to entertain doubts about the existence of true black holes
shortly before he passed away in 2018.
A true black hole supposedly forms when an object possessing positive mass-energy is contained within its own
Schwarzschild radius. An “event horizon” then forms at the Schwarzschild radius, resulting in all sorts of paradoxes that
have yet to be resolved. But the truth of the matter is that mass-energy both inside and outside the radius must be taken into
account, including negative gravitational energy associated with space-time distortion. Ordinarily, this negative
gravitational energy is negligible compared to the positive mass-energy of the gravitating object, so it is ignored. However,
as the size of the gravitating object approaches the size of the Schwarzschild radius, negative gravitational energy is no
longer insignificant and it starts to reduce the size of the Schwarzschild radius. Reducing the size of a physical object until
it fits within its own Schwarzschild radius is an impossibility – the intense negative gravitational energy associated with the
space-time distortion would ultimately cancel all of the object’s positive mass-energy, thereby reducing the Schwarzschild
radius to zero! Physicist Abhas Mitra of India derived the same result directly, by solving Einstein’s field equations.
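The Schwarzschild radius that this finding turns on is given by the standard formula Rs = 2GM/c². As a minimal sketch, using textbook values for the constants and masses:

```python
G = 6.674e-11   # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Rs = 2*G*M/c^2 -- the radius at which an event horizon would nominally form."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(1.989e30))   # Sun: roughly 3 km
print(schwarzschild_radius(5.972e24))   # Earth: roughly 9 mm
```

The point of the finding is that this naive radius ignores the negative gravitational energy of the surrounding distortion, which (on the author’s argument) shrinks the effective radius as the object approaches it.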
6. The combined universe is equivalent to an expanding “black hole.”
Although forming a black hole within the universe is not possible due to the negative gravitational energy associated with
space-time distortion surrounding a compact object, this restriction does not apply to the universe as a whole simply because
nothing surrounds the universe. Einstein’s field equations generate an expanding universe with a quantum-mechanical
surface defined by the Bekenstein-Hawking (B-H) equation. That surface is a true universal “event horizon” where
everything exists in the present moment and is encoded as information called B-H entropy! This is equivalent to the
holographic principle, except for the fact that information encoded on the hologram isn’t in some far-distant place
surrounding our 3-dimensional world. The surface of the hologram is right here and right now, is expanding, and surrounds
a 3-dimensional historical record of the past we see when we look out into space.
The expansion is caused by an asymmetrical, or curved, temporal dimension. The curvature is defined by a radius,
measured in spatial units (inches, feet, kilometers, etc.). The implied center of curvature is the Beginning of the temporal
dimension. The temporal dimension “flattens” as the radius of curvature lengthens, and this lengthening defines the speed
of light, c. Mathematically, the event horizon is a surface of uniform temporal curvature (time = constant) oriented
perpendicular to the temporal dimension. This surface can be represented schematically as an expanding sphere, but it must
be stressed that this is only a schematic representation and not a 3-D physical model. There is no space “outside” the universe.
Earlier, I promised to explain why all objects travel through space-time: It is because they are being carried along by an
expanding universal event horizon with a temporal radius increasing at the speed of light.
7. Spatial symmetry requires three degrees of freedom in space.
Spatial symmetry means there are no preferred spatial directions or locations. According to Emmy Noether’s theorems,
spatial symmetry exists if and only if linear momentum and angular momentum are conserved. The angular momentum of a
moving object is equal to the cross product of the object’s linear momentum vector and the distance vector from the
observer to the moving object. The cross product of two vectors can be properly defined only if there are exactly three
orthogonal directions. Thus, spatial symmetry requires the existence of three orthogonal directions or degrees of freedom,
which we humans tend to interpret as three-dimensional space. However, all three of those “dimensions” always point
along the radius of temporal curvature back toward The Beginning.
8. Spatial symmetry requires the property of mass-energy.
Because spatial symmetry requires conservation of linear momentum and angular momentum, it is obvious that
mass must be present in order for momentum to exist in the first place, since momentum equals mass times velocity.
Therefore, any object associated with the spatial dimension must possess mass in order to support the conservation of
momentum laws, with energy simply being mass in a different form according to the special theory of relativity.
9. Accelerating objects always move in directions pointing toward the past, making proper time “slow down.”
Everything “inside” the B-H surface represents events in the past, which were once on the B-H surface. As long as an
object stays on its geodesic path, it is carried along by the B-H surface away from the past (this is the origin of the
cosmological red shift). When an object accelerates and departs from its geodesic path, it always travels in a direction
pointing toward the center of temporal curvature; i.e., it travels in a direction pointing toward the past, causing its proper
time to fall behind clocks on their geodesic paths. Proper time is lost only when external forces cause an object to deviate
from its natural geodesic path.
When a clock is in uniform motion relative to an observer, the clock appears to slow down from the observer’s perspective;
however, proper time never actually slows down for a clock in uniform motion. The apparent slowing down is because
distances are foreshortened in the direction of motion in the moving object’s frame of reference. Take for instance cosmic
ray particles produced in the Earth’s upper atmosphere. When the particles travel near the speed of light relative to an
observer on the Earth’s surface, they appear to decay more slowly than they would “at rest.” This is because the distance
from the upper atmosphere to the Earth’s surface is significantly foreshortened in the moving particles’ frames of reference
and those particles can then travel all the way from the upper atmosphere to the Earth’s surface before they decay.
Distance foreshortening also explains the so-called Twins Paradox. The traveling twin’s proper-time clock slows down only
during the acceleration phases of the journey. Foreshortening makes it possible to complete the journey with fewer ticks on
the traveling clock compared to the number of ticks it takes to complete the journey shown on the stay-at-home clock.
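The cosmic-ray argument above can be put into numbers. The sketch below uses assumed illustrative values (a 15 km production altitude, a speed of 0.995c, and the muon’s 2.2 μs proper lifetime); the essay itself doesn’t specify these figures.

```python
import math

c   = 2.998e8      # speed of light, m/s
v   = 0.995 * c    # assumed particle speed
tau = 2.2e-6       # muon proper lifetime, s (assumed example particle)
L0  = 15_000.0     # assumed production altitude, m (Earth frame)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # Lorentz factor, about 10 here
L_contracted = L0 / gamma                 # foreshortened distance in the particle's frame

# Fraction of particles surviving the trip, exp(-t/tau), with t measured
# in the particle's own frame:
survive_with_foreshortening = math.exp(-(L_contracted / v) / tau)
survive_naive = math.exp(-(L0 / v) / tau)   # ignoring relativity entirely

print(survive_with_foreshortening)   # a substantial fraction reach the ground
print(survive_naive)                 # essentially none would
```

With foreshortening, roughly a tenth of the particles survive the trip; without it, effectively none would, which is why their arrival at the surface is taken as evidence of the effect.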
10. Clocks don’t actually measure time; they only record physical changes.
When Einstein was asked to define proper time as it relates to relativity, he replied that proper time is what a clock
measures. His answer was probably made in jest, because it really wasn’t true. It is true that a clock in free-fall follows a
geodesic path through space-time that maximizes proper time, or the “distance” between two events in space-time. For
example, a clock floating weightlessly in the International Space Station will record more time when it completes an Earth
orbit than a clock sitting on the ground. That’s because the space-station clock is running at the maximum possible speed,
whereas the Earth’s surface is pushing the earth-bound clock upward, causing it to accelerate and run slower. But what are
those two clocks really measuring?
Every clock records changes due to some physical process, such as the number of grains of sand falling through an
hourglass, the number of revolutions the Moon makes circling the Earth, the number of rotations the Earth makes around its
axis, the number of swings a pendulum makes in a grandfather clock, the number of oscillations a mainspring makes in a
watch, etc. There is something else that can be counted: the number of bits of information being added to the space
surrounding the clock, or more precisely the increase in density of B-H entropy in the vicinity of the clock.
Most of the so-called laws of physics are mere mathematical descriptions of systems undergoing changes. All laws are tied
to one truly Fundamental Universal Principle, which is revealed in Finding 13.
11. Space, matter, and energy are a result of temporal asymmetry.
We have seen that the spatial dimension is a means of measuring temporal asymmetry using a radius of curvature expressed
in spatial units.
Time → Temporal Asymmetry → Space
Temporal asymmetry (curvature) is a necessary and sufficient condition for change to occur. Spatial symmetry (flatness)
maximizes uncertainty, and in the absence of external forces causing an object to accelerate, it will follow a geodesic path
such that space attains symmetry (flatness) in the object’s frame of reference. Spatial symmetry demands conservation of
linear and angular momentum per Noether’s Theorems, which requires mass. Energy is another form of mass, per the
special theory of relativity. Angular momentum is the cross product of two vectors in space, which requires three degrees of
freedom (customarily interpreted as three-dimensional space).
Space → Spatial Symmetry → Conservation of Momentum → Mass-energy → Three Degrees of Freedom
12. The cosmological red shift is caused by relative proper-time displacements.
Distances between objects and observers are equivalent to displacements of proper time, meaning a distant object literally
exists in a time prior to the observer. A distant object “being in the past” is the same as if its proper-time clock had been
running more slowly than the observer’s proper-time clock since the beginning of time. Furthermore, relative slowness of
the distant proper-time clock is equivalent to receding even further into the observer’s past, further increasing the distance
between them. Every observer sees all distant objects appear as if they are traveling into the past and their clocks are
slowing down. If the Present Moment always recedes from The Beginning at a constant rate (the speed of light), these
displacement effects are proportional to distance.
13. All physical changes are tied to (and restricted by) B-H entropy density.
As discussed previously, entropy is information with maximal uncertainty (all states having the same probability). An
expanding B-H surface requires an increase of information – no expansion can occur without it. It should be pointed out
that B-H entropy is cosmological in nature, and it can be detached from the thermodynamic entropy and temperature of a
physical system. In other words, whereas B-H entropy must always increase, the entropy of an isolated physical system can
remain constant (although it can never decrease). Temperatures of physical systems, like the Sun, can be thousands of
Kelvin, whereas the B-H temperature is extremely low. According to my best guesstimate, the B-H temperature is the
tiniest fraction of a Kelvin, compared to the so-called “cosmic microwave background” temperature of 2.725 K. Over time,
however, we would expect all temperatures in the universe to converge to the B-H temperature. Be that as it may, in order
for any changes to physical systems to occur, these must be accompanied by increases in the entropy density on the B-H
surface. This is because every physical change leads to pathways to additional states, increasing overall uncertainty that
generates additional information. Every tick of a clock and every revolution of a planet around its star is a change tied to
this Fundamental Universal Principle.
14. The B-H temperature is inversely proportional to the age of the universe.
The B-H surface temperature, TB-H, is given by the following equation.
TB-H = ħ c^3 / (8π G M kB), where ħ is the reduced Planck constant, c is the speed of light, G is Newton’s
gravitational constant, M is the total mass-energy of the universe encoded on the B-H surface, and kB is
Boltzmann’s constant.
2GM/c^2 is equal to the Schwarzschild radius of the universe, or the radius of temporal curvature, RB-H. By definition, the
speed of light equals the rate at which RB-H increases, so RB-H equals c times the age of the universe, tU.
TB-H = ħ / (4π tU kB)
Assuming the universe is approximately 14 billion years old,
tU = 4.41 × 10^17 sec
ħ = 1.05 × 10^-34 joule-sec
kB = 1.38 × 10^-23 joule/K
∴ TB-H = 1.37 × 10^-30 K
The universe is pretty chilly at present, but it wasn’t always this cold. When the age of the universe was equal to one Planck
time, TB-H was right around the Planck temperature of 1.41 × 10^32 K according to the B-H temperature equation.
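The present-day figure quoted above can be reproduced directly from the reduced formula TB-H = ħ / (4π tU kB), using the same constants listed in this finding:

```python
import math

hbar = 1.05e-34   # reduced Planck constant, J*s
k_B  = 1.38e-23   # Boltzmann constant, J/K
t_U  = 4.41e17    # age of the universe, s (~14 billion years)

# Finding 14's reduced formula: T_B-H = hbar / (4 * pi * t_U * k_B)
T_BH = hbar / (4 * math.pi * t_U * k_B)
print(T_BH)   # ~1.37e-30 K
```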
15. The energy conservation law is an approximation.
The cornerstones of science are the laws of conservation of energy, momentum and angular momentum. The momentum
conservation laws are a consequence of spatial symmetry, but energy isn’t strictly conserved because time isn’t truly
symmetrical. Temporal asymmetry exists to allow the total mass-energy encoded on the B-H surface to increase, which it
must do in order for expansion to occur. At the present time, the B-H surface temperature is extremely low and the temporal
dimension is virtually flat; therefore, any deviations from the law of conservation of energy are very small and are hardly
noticeable over short time intervals. So although the law of conservation of energy is only an approximation, it’s still a
pretty good approximation.
16. Information is equivalent to mass-energy.
According to the Landauer Principle, one bit of information is equivalent to the following quantity of energy, E.
E = kB T loge 2, where kB is Boltzmann’s constant and T is absolute temperature
The energy-information equivalence has been experimentally verified by a team led by Mang Feng of the Chinese Academy
of Sciences in Wuhan, using a quantum system in which bits of information were transformed into heat energy in a reservoir
where heat energies were quantized. By extension of special relativity, matter must also be equivalent to information.
This harkens back to John Wheeler’s “It from bit” conjecture. The idea that physical reality is based on information is
becoming increasingly accepted by theoretical physicists. In a physical system like the one used in the Mang experiment,
the temperature T in the Landauer equation refers to the temperature of the particular system as measured using an ordinary
thermometer. But we have seen that space-time itself has its own thermodynamic temperature, TB-H, which is detached from
the temperatures of physical systems. It is the cosmological temperature of space-time itself that defines the equivalence
between mass-energy and information in the Landauer equation.
The number of bits equivalent to the mass of a single electron is mind-boggling. An electron weighs in at 9.11 × 10^-31 kg,
and yet this tiny mass is equivalent to 6.26 × 10^39 bits of information at the present B-H surface temperature. The smallest
“black hole” that is theoretically possible has a mass around one Planck mass. We saw earlier that when tU was equal to one
Planck time, the B-H temperature was equal to the Planck temperature. It’s interesting to note that according to the
Landauer Principle, one Planck mass at the Planck temperature is equivalent to one nat of information (where 1 nat ≈ 1.44
bits). Could it be possible that the entire material universe began as a Planck-scale object equivalent to just one nat of information?
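Both numerical claims in this finding can be checked from the Landauer equation, using the B-H temperature from Finding 14 and textbook values for the electron mass, Planck mass, and Planck temperature:

```python
import math

c    = 2.998e8     # speed of light, m/s
k_B  = 1.38e-23    # Boltzmann constant, J/K
m_e  = 9.11e-31    # electron mass, kg
T_BH = 1.37e-30    # present B-H surface temperature (Finding 14), K

# Landauer: one bit <-> k_B * T * ln(2) of energy; one nat <-> k_B * T.
bits_per_electron = (m_e * c**2) / (k_B * T_BH * math.log(2))
print(bits_per_electron)   # ~6.3e39 bits

# Planck-scale check: one Planck mass at the Planck temperature is ~1 nat.
m_planck = 2.18e-8    # Planck mass, kg
T_planck = 1.41e32    # Planck temperature, K
nats = (m_planck * c**2) / (k_B * T_planck)
print(nats)   # close to 1 nat
```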

Information lies at the root of physical reality – time, space, matter and energy. We surely see this from a quantum
mechanical perspective, and thanks to the B-H formula, we also see it from the general theory of relativity. What most
people don’t realize is that information is not a thing, but is just a measurement of uncertainty. What is the ultimate
meaning of this? It can only mean that physical reality springs from uncertainty, and uncertainty lies in thought, and
thought can only exist in the mind. Therefore, the mind should be at the root of physical reality, almost like a living being.
The hidden secrets of the general theory of relativity are enough to convince me that idealism is a more reasonable possibility
than materialism.
Materialists will argue that the mind can only emerge from a sufficiently-complex physical brain; thus, to the extent that
logic and mathematics are mental concepts, they too must emerge from a physical brain. But if this were true, does it mean
logic and mathematics didn’t exist before there were physical brains? And if that were true, how could a logically-
consistent universe have evolved prior to the existence of logic?
Addendum – A Short Tutorial on Information Theory
The term “information” is tossed around quite a bit in popular science literature. Unfortunately, it seems that many (most?)
authors who use this term don’t have the foggiest notion of what information really is. I want to clear up the confusion by
discussing the ground-breaking work of a fellow electrical engineer named Claude Shannon, whom I consider to be the
Father of Information Theory.
During WWII, Shannon joined Bell Labs, which was under contract with the National Defense Research Committee.
During the war he worked on cryptography, among other things, which led to his post-war research on the communication
problem, i.e. finding the most efficient way to send signals through a noisy communication channel while reducing errors
to a minimum. Shannon realized that not all signals have the same “quality” or informational content, so the first order of
business was to identify exactly how “information” should be measured. He concluded that communication is worthless if
it doesn’t contain a surprise factor (yes, Shannon actually used the term surprise). In other words, surprise and uncertainty
are intrinsic properties of information. Suppose a TV announcer on the Channel 2 Six O’clock News states, “And in other
news, the Sun rose in the east this morning.” That message contains zero information because none of the viewers were
surprised in the least by it, so sending it was both a complete waste of time and a waste of television bandwidth. On the
other hand, suppose the announcer said, “And in other news, astronomers have confirmed that a giant asteroid will collide
with the Earth tomorrow night at 11:45 PM EST.” That message contains a lot of information simply because it was totally
unexpected by the viewers plus it raises a host of further questions and uncertainties in the minds of the viewers.
Shannon based his definition of information on probability, which is how uncertainty is expressed mathematically. 1 It’s
clear that because the sender of a message is completely certain about its contents, information only exists at the receiving
end of the communication channel. The true relationship between the sender and receiver is often muddled because people
tend to conflate information with data, which are completely different things (although both are measured in bits). Data
represent definition and certainty, which are the opposite of uncertainty.
The key to efficient communication is to strip as much redundancy, or non-informational bits, as possible from the data
before sending the message into the communication channel.2 The process of removing redundancy is referred to as data
compression. Lossless compression strips away only the redundant bits, leaving information intact. By removing all
redundancy from binary data, the a priori probability of receiving 0s equals the a priori probability of receiving 1s. This
maximizes the uncertainty and information at the receiving end, so that one bit of data sent equals one bit of information
received.3 A condition of maximal information is equivalent to entropy.
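The claim that removing redundancy raises the information carried per data bit can be illustrated with a binary source. In this sketch (my own, using Shannon’s formula from the main text), a fully compressed stream has P(0) = P(1) = 1/2 and carries a full bit per symbol, while a biased, redundant stream carries less:

```python
import math

def bits_per_symbol(p):
    """Shannon information of a binary source where P(1) = p and P(0) = 1 - p."""
    if p in (0.0, 1.0):
        return 0.0   # certainty: no information at all
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(bits_per_symbol(0.5))   # 1.0 -- fully compressed: 1 data bit = 1 bit of information
print(bits_per_symbol(0.9))   # ~0.47 -- redundant: under half a bit per data bit
```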
The Landauer Principle, which has been experimentally verified, states that energy and information are interchangeable,
which also makes information interchangeable with matter because e = m c2. Einstein argued that space and time are
meaningless in the absence of matter and energy, so space and time must rely on information as well. If we accept the fact
that all of physical reality (whatever that means) is comprised of information, then we also must accept the fact that reality
is based on uncertainty, which can only exist in the mind of a conscious observer. After all, how could inanimate objects
ever be “surprised” or “uncertain” about anything? Well, maybe they could if they operated on the quantum level where
everything exists as a cloud of uncertainty. This suggests a brain – basically an electro-chemical machine – could possess
consciousness and be capable of experiencing surprise and uncertainty by interacting with reality on the quantum level in
addition to the classical world of sight, sound, smell, taste and touch.
There is a deep and subtle connection between general relativity and information, but this connection cannot be fully
grasped as long as information is conflated with data. I hope this short tutorial helped clear up this confusion.

1 Recall the formula S = – Σ pk log (pk), where pk < 1 reflects uncertainty about state k. If any pk = 1, then S = 0. Certainty destroys information.

2 A channel can take on various forms. It can be temporal, as when data are transmitted using a series of timed radio pulses, or spatial, when data are
burned onto the surfaces of CDs or DVDs. In either case, data can be corrupted by noise in or on the channel, which calls for error-correcting
measures (see Footnote 3, below).

3 Unfortunately the ideal 1:1 relationship cannot be achieved when the data are altered in the presence of noise. The second phase of the
communication process is to cleverly encode the data stream in order to counter the noise. The famous Shannon-Hartley theorem states the capacity
of a channel, C, in bits per second depends on the channel bandwidth, B, in Hz and the signal-to-noise ratio, S/N: C = B log2 (1 + S/N). If the data
rate is less than C, then it is theoretically possible to encode the data in a way that the error probability at the receiving end is made arbitrarily small. The
encoding process must introduce redundancy back into the data, thereby reducing the information per data bit below 1:1. However, this is a necessary
trade-off in order to assure reliable communication through noisy channels.
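The Shannon-Hartley formula in Footnote 3 is simple enough to evaluate directly. The bandwidth and signal-to-noise figures below are assumed for illustration (roughly a voice-grade telephone line):

```python
import math

def channel_capacity(bandwidth_hz, snr):
    """Shannon-Hartley: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

# Assumed example: 3 kHz of bandwidth with a signal-to-noise ratio of 1000 (30 dB)
print(channel_capacity(3000, 1000))   # roughly 30,000 bits per second
```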