
# Why There Are No True Black Holes

## (An Exposé by an Amateur Scientist)

by John Winders
You can access and download this essay and my other essays through the Amateur

You are free to download and share all of my essays without any restrictions, although it
would be very nice to credit my work when quoting directly from them.
Black holes have been a favorite topic of astrophysicists, cosmologists, and theoretical physicists for a
very long time. Almost every theoretical paper published in the past few decades contains at least some
elements of black holes, or cites other papers that do. Black hole topics have generated a number of so-called
"paradoxes." The most notorious of these involves the AMPS thought experiment, which has propelled theoretical physics into the realm of
science fiction. More on that later. However, I'm extremely confident that all of these “paradoxes”
have the same resolution; namely, that true black holes simply do not and cannot exist.1
First, a bit of background: Albert Einstein published his paper on general theory of relativity (GR) in
1915, where he produced his famous field equation relating space-time curvature to mass-energy. In
the same year, Karl Schwarzschild found an exact solution to the Einstein field equation in empty space
outside the gravitating body. This solution, published in 1916, is called the Schwarzschild metric:2

c²dτ² = (1 − rs/r) c²dt² − (1 − rs/r)⁻¹ dr² − r² (dθ² + sin²θ dφ²)

This metric is expressed in spherical coordinates (r, θ, φ), where r is the proper distance from the center
of the gravitating body. It only works in empty space, away from the gravitating body. The distance rs
is the Schwarzschild radius, which is very important, as we shall see. The symbol τ represents proper
time as shown on a clock at distance r, and t represents time as shown on a clock located at r→∞. For
situations that don't involve changes in tangential directions θ and φ, the term r² (dθ² + sin²θ dφ²) can
usually be ignored. As usual, c is the speed of light.
The Schwarzschild radius is found by this formula: rs = 2MG/c², where G is Newton's gravitational
constant and M is the mass of the gravitating body, or more accurately its total mass-energy expressed
as a mass. We will see later why this distinction is so important.
Every object in the universe that has mass has a Schwarzschild radius. The Earth has one too. It's
around the size of a marble. Time on the orbiting Moon and artificial satellites could be accurately
compared by using the Earth's rs value in the Schwarzschild metric and solving for dτ/dt. A very different
situation occurs if the Earth's mass is squeezed inside that marble-sized radius: The Earth turns into a
black hole. Now, instead of rs being a hypothetical radius buried somewhere within the bowels of the
Earth, it becomes something much more sinister: It defines an Event Horizon.
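To make this concrete, here is a small Python sketch computing Earth's Schwarzschild radius and the static time-dilation factor dτ/dt = √(1 − rs/r) that falls out of the metric. The constants are approximate CODATA values, and the GPS orbital radius is just an illustrative number of my choosing:

```python
# Sketch: Earth's Schwarzschild radius rs = 2MG/c^2 and the Schwarzschild
# time-dilation factor dtau/dt = sqrt(1 - rs/r) for a static clock at radius r.
# Constants are approximate CODATA values; the orbit radius is illustrative.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # mass of Earth, kg

rs = 2 * G * M_earth / c**2          # Schwarzschild radius of Earth
r_gps = 2.656e7                      # GPS orbital radius, about 26,560 km

dtau_dt = (1 - rs / r_gps) ** 0.5    # clock rate at orbit vs. clock at infinity

print(f"Earth's rs = {rs * 1000:.2f} mm")   # about 9 mm, marble-sized
print(f"dtau/dt at GPS orbit = {dtau_dt:.15f}")
```

The first printout confirms the marble-sized figure quoted above; the second shows how close to 1 the dilation factor is for a weak field like Earth's.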
Returning to the Schwarzschild metric, we see there are two places where things go completely
haywire: at r = 0 and at r = rs, known as singularities. Basically, c²dτ² blows up at these two places.
The first singularity, at r = 0, is at the center of a black hole. It is typically interpreted as being a place
where all the black hole's mass is concentrated. It is near this singularity where objects are said to be
“spaghettified”3 because of enormous tidal forces in that region. However, I believe this is a
misinterpretation. You see, space and time reverse roles when r < rs. So r = 0 isn't really a place at all.
It's a time when everything falling inside the black hole comes together. Or you could say it's the end
of time itself.
The r = rs singularity is equally problematic. An object falling into a black hole from far away will
accelerate until it's traveling at the speed of light. One might ask what that speed is relative to, and I
would say it's relative to the Event Horizon. So if Alice is falling into a black hole and there were a
stationary warning sign hanging one mile from the Event Horizon saying, “Danger! Event Horizon
1 Even Stephen Hawking has been hedging his bets lately. Now he thinks black holes are possibly more like fuzz balls.
2 Some argue that the Schwarzschild metric was actually derived by David Hilbert, another German scientist. But we're
not going to quibble over this.
3 The term spaghettification was possibly invented by Neil deGrasse Tyson.

Ahead,” she'd be zooming past that sign at the speed of light in her reference frame. The problem is
that when she reaches the Event Horizon, her acceleration suddenly jumps to infinity. So although
Alice is already traveling at light speed, her speed would increase by some indeterminate amount.
Would she end up falling at two times light speed, or ten times light speed? Who knows?
Does any of this make any sense? This should have been the first clue that something is very, very
wrong with the whole idea of a black hole. But there's more. According to orthodox black hole theory,
the escape velocity at the Event Horizon is the speed of light, c. Okay, fine. So you would assume that
a light wave could orbit the black hole at the event horizon, right? No. It turns out that light can orbit
the black hole all right, but only at a distance of r = 1.5 rs. Huh? If escape velocity at r = rs is c, then
wouldn't escape velocity at 1.5 rs be less than c? And if so, then why wouldn't light simply zoom off
into space instead of orbiting around the black hole? So far I have used very little math – just reason
and logic that a child can understand and judge the whole thing to be nonsensical.
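My escape-velocity complaint can be put in numbers. Using the Newtonian-style escape speed v = √(2GM/r), which can be rewritten as c√(rs/r) (a simplification on my part; GR's notion of escape is subtler), the speed at the photon sphere is already well below c:

```python
# Sketch of the escape-velocity puzzle in the text: the Newtonian-style
# escape speed v_esc = sqrt(2GM/r) can be rewritten as c * sqrt(rs/r).
# At the Event Horizon (r = rs) it equals c; at the photon sphere
# (r = 1.5 * rs) it is already noticeably below c.

def v_esc_over_c(r_over_rs):
    """Escape speed as a fraction of c at radius r = r_over_rs * rs."""
    return (1.0 / r_over_rs) ** 0.5

print(v_esc_over_c(1.0))   # exactly 1.0: light speed at the horizon
print(v_esc_over_c(1.5))   # about 0.816 at the photon sphere
```

So by this naive accounting, light at 1.5 rs is moving faster than the local escape speed, which is exactly the puzzle posed above.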
Albert Einstein didn't like the idea of black holes either, so he decided to find out how and if a black
hole could be manufactured from dust particles. A dust particle is simple because it doesn't exert any
pressure – it's just attracted to other dust particles. What Einstein found was that if a dust particle falls
toward a black hole, its trajectory gets to be very unstable near the Event Horizon. At some distance
outside rs, the dust just veers off and never reaches the black hole. In simple terms, Einstein concluded
that making black holes from dust is impossible. He published his results in 1939 in a paper entitled
“On A Stationary System With Spherical Symmetry Consisting of Many Gravitating Masses.” Now
Einstein was no slouch when it came to GR, but for the most part people still ignored his paper.
J. Robert Oppenheimer also came to a similar conclusion about black holes in 1939. He published a
paper with Hartland Snyder entitled, “On Continued Gravitational Contraction.” They found that, yes,
a star could collapse into a black hole; however, it would take an infinitely long time for that to
happen.4 As far as we know, the universe hasn't been around for an infinitely long time, so it seems that
black holes are just a wee bit problematic from a practical standpoint. But that's not all …
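The Oppenheimer/Snyder conclusion can be sketched numerically. For a radially infalling light ray, setting dτ = 0 in the Schwarzschild metric gives dt = dr / (c (1 − rs/r)), and the coordinate time to reach rs diverges logarithmically. Here is a small check in units where rs = c = 1 (the closed-form integral is my own bookkeeping, not from their paper):

```python
import math

# Distant observer's view: for a radial light ray, dt = dr / (1 - rs/r)
# (with rs = c = 1). Integrating from rs + eps down to a starting radius r0
# gives the closed form t = (r0 - 1 - eps) + ln((r0 - 1) / eps), which
# diverges as eps -> 0: nothing is ever seen to reach the horizon.

def coord_time(eps, r0=10.0):
    """Coordinate time for light to fall from r0 to rs + eps (rs = c = 1)."""
    return (r0 - 1.0 - eps) + math.log((r0 - 1.0) / eps)

for eps in (1e-1, 1e-3, 1e-5):
    print(f"eps = {eps:.0e}: t = {coord_time(eps):.1f}")   # grows without bound
```

Each factor-of-100 step closer to the horizon adds a fixed chunk of coordinate time, so the total never converges.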
In the 1970s, yet another major “paradox” came to light – the information loss paradox. I'll try to
simplify this as much as possible. According to GR, when an object is consumed by a black hole, the
object is gone forever as far as the universe is concerned. The problem is that all the information
(entropy) of that object disappears along with it, and that's a huge problem because it violates the
second law of thermodynamics by reducing the entropy of the universe. That's the same as destroying
information, which is equally as bad. In the 1970s Stephen Hawking and Jacob Bekenstein worked on
this problem and came up with a solution. If the entropy of the material falling into a black hole is
transferred to the black hole, then the surface of the black hole should have a temperature, because in
thermodynamics an object's temperature measures how its energy changes with its entropy
(1/T = dS/dE). Since a black hole should emit thermal black-body radiation like any other object
having a temperature, in theory the information consumed by the black hole is sent back into the
universe at a later time encoded in the black-body radiation.
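For the curious, here is a quick sketch of the standard Hawking temperature and Bekenstein-Hawking entropy formulas applied to a solar-mass hole. These formulas come from the standard literature, not from anything derived in this essay, and the constants are approximate:

```python
import math

# Sketch: the standard results T = hbar*c^3 / (8*pi*G*M*kB) and
# S = kB*c^3*A / (4*hbar*G), with A = 4*pi*rs^2, for a solar-mass hole.
# Constants are approximate CODATA values.

hbar = 1.055e-34; c = 2.998e8; G = 6.674e-11; kB = 1.381e-23
M_sun = 1.989e30   # kg

rs = 2 * G * M_sun / c**2            # about 3 km
A = 4 * math.pi * rs**2              # horizon area
T = hbar * c**3 / (8 * math.pi * G * M_sun * kB)
S = kB * c**3 * A / (4 * hbar * G)

print(f"rs ~ {rs / 1000:.2f} km")
print(f"T ~ {T:.2e} K")              # far colder than the CMB
print(f"S/kB ~ {S / kB:.2e}")        # an enormous entropy
```

The temperature for a stellar-mass hole is absurdly cold, which is why this radiation has never been observed directly.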
Problem solved … well, sort of … because it didn't take very long for the problem to rear its ugly head
again. This time it had to do with quantum mechanics. You see, a black hole doesn't just spit out
black-body radiation. It also spits out particles called Hawking radiation. This is due to the fact that
the vacuum of space is filled with virtual particles. An electron and its anti-particle, the positron, can

4 This was the same conclusion I came to when I first learned about black holes back in junior high school. My
reasoning was if an object falling into a black hole essentially stops before it gets to the Event Horizon (as seen by a
distant observer), how the heck did all that stuff get in there in the first place?

randomly emerge from the vacuum for a short time and then disappear. The virtual pair can be
created by "borrowing" energy from the vacuum as long as the energy is paid back in due time.
This has to do with the Heisenberg uncertainty principle: ΔE ∙ Δt ≈ ħ. The more energy ΔE that is
borrowed, the shorter the payback time Δt becomes. Some people imagine pairs of Volkswagens or
refrigerators emerging from the vacuum, but unfortunately that can't happen. The upper limit of ΔE is
the Planck energy, which is equivalent to the Planck mass of 0.022 milligrams.5
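Here is a small sketch of that energy-time estimate: a virtual electron-positron pair "borrows" ΔE = 2mec² and must repay it within Δt ≈ ħ/ΔE. The same script checks the Planck-mass figure quoted above. Constants are approximate:

```python
# Sketch of the energy-time uncertainty estimate in the text: a virtual
# electron-positron pair borrows delta_E = 2*m_e*c^2 and must vanish
# within delta_t ~ hbar / delta_E. Also checks the Planck mass
# m_P = sqrt(hbar*c/G) against the 0.022 mg figure.

hbar = 1.055e-34; c = 2.998e8; G = 6.674e-11
m_e = 9.109e-31                        # electron mass, kg

delta_E = 2 * m_e * c**2               # rest energy of the pair, J
delta_t = hbar / delta_E               # allowed lifetime, s

m_planck = (hbar * c / G) ** 0.5       # Planck mass, kg

print(f"pair lifetime ~ {delta_t:.1e} s")
print(f"Planck mass ~ {m_planck * 1e6:.3f} mg")   # about 0.022 mg
```

The pair lives for less than a zeptosecond, which is why no one ever catches a virtual Volkswagen in the act.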
Now suppose a pair of virtual particles pops up right next to the Event Horizon, and one of those
particles is sucked into the black hole while the other one zooms off into space.6 Uh-oh. How is the
borrowed energy supposed to be returned to the vacuum? No worries. You see, the particle absorbed
by the black hole has negative energy.7 So the net effect of absorbing a virtual particle is zero
borrowed energy and a decrease in the hole's mass, M. As the black hole radiates particles into space,
it steadily shrinks over time until it evaporates completely. But now we have an even bigger problem
because Hawking radiation is a stochastic quantum process, meaning that it's entirely random.
Hawking particles radiating into space carry no information about the objects that previously fell into
the black hole. There is no way, even in principle, to reconstruct what went into it from what came out.
Once the black hole evaporates, all information about what went into it is gone forever, and we're right
back to the original information-loss paradox. There was never any definitive answer to this version of
the paradox, although there certainly was a lot of hand waving and several abstruse proposals including
something called AdS/CFT correspondence.8 But theoretical physicists remain undaunted, and they
keep writing papers about black holes despite growing evidence that they might not be real.
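To put a timescale on that evaporation, here is the standard estimate t_evap = 5120 π G² M³ / (ħ c⁴), which comes from the mainstream literature rather than from this essay, applied to a solar-mass hole:

```python
import math

# Sketch: the standard Hawking evaporation-time estimate
# t_evap = 5120 * pi * G^2 * M^3 / (hbar * c^4) for a solar-mass hole.
# Constants are approximate CODATA values.

hbar = 1.055e-34; c = 2.998e8; G = 6.674e-11
M_sun = 1.989e30     # kg
year = 3.156e7       # seconds in a year

t_evap = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
print(f"t_evap ~ {t_evap:.1e} s ~ {t_evap / year:.1e} years")
```

The answer is around 10^67 years, vastly longer than the age of the universe, so the paradox is academic in the extreme, which hasn't stopped anyone from worrying about it.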
The spit really hit the fan in 2012 when Joseph Polchinski along with three others9 came up with what
is known as the AMPS paradox. The paradox was in the form of a “thought” or “gedanken”
experiment.10 It was presented to the public in various ways, some of them making sense and others
not so much, but I'll try to keep it simple. When a Hawking pair of particles is generated at the Event
Horizon, they are entangled. Entanglement is a quantum-mechanical property that is fairly easy to
accomplish in a laboratory but very difficult to maintain in the macroscopic world of Volkswagens and
refrigerators. For example, pairs of entangled photons are created in the famous Bell's inequality
experiments, but those photons become disentangled very quickly if one of them encounters a stray
dust particle or an imperfection on a mirror. However, when one of the particles of a Hawking pair gets
sucked into the black hole, the pair remains entangled forever and there is literally no way to untangle them.
Now a real problem arises when about half of the black hole has radiated away. All of the matter inside
the black hole is already entangled with the matter that radiated away a long time ago, and yet there is
still more matter left to radiate. A fundamental principle of QM called the monogamy of entanglement
principle says that there cannot be any ménages à trois involving entangled particles. What this boils
down to is that some entanglements across the EH must be broken eventually, and that is bad news.
According to Polchinski and another quantum physicist, Juan Maldacena, breaking entanglements
across the Event Horizon literally unzips space and releases enormous energy. This creates what they
called a “firewall” or a zone of incredibly high temperature at the Event Horizon, and anyone who falls
5 According to Wikipedia, that's the mass of a flea's egg.
6 Of course since the escape velocity at the Event Horizon is c, the second particle would have to be going awfully fast in
order to escape. The cover of this essay depicts a black hole with particle-antiparticle pairs being created near the event
horizon with the red ones escaping while the blue ones are being absorbed.
7 Keep this negative energy in mind. This will be used later to show why true black holes don't exist.
8 Which I completely don't understand, although I think it might have something to do with string theory.
9 Donald Marolf and two grad students, Ahmed Almheiri and James Sully.
10 In Gedanken World, when you don't really know how or even if something can happen, you merely assume it does.

into one of those black holes is consumed in a violent conflagration. Of course, this is contrary to GR,
which stipulates that nothing unusual happens to a person at the Event Horizon (other than the fact that
inward velocity jumps instantaneously at the EH).
Of course, this should have been a wake-up call that maybe, just maybe, a black hole is a complete
work of fiction. But of course the exact opposite happened. AMPS fueled a flurry of conferences and
publications of many papers in an attempt to solve this latest Mother of all Paradoxes. One of the more
creative attempts to fix this problem was a paper published by Maldacena and Leonard Susskind,
known by the shorthand description ER = EPR. What this cryptic formula means is that a paper
published in 1935 by Einstein, Podolsky, and Rosen (EPR) is supposedly equivalent to another paper
Einstein and Rosen (ER) published in that same year. Specifically, they say entangled particles
described in the EPR paper are connected through wormholes that are allegedly described in the ER
paper. Well, I downloaded the ER paper and checked it out for myself. I found nothing in the paper
mentions anything about wormholes that connect different regions of space. All Einstein and Rosen
tried to do was to make charged elementary particles from mini black holes. In order to get rid of the
singularities at r = 0 and r = rs, they replaced the proper distance, r, in the Schwarzschild metric with a
new variable, u, where u² = r − rs. With the Schwarzschild metric expressed in terms of the new
variable, u, the mini black hole no longer has an interior or an Event Horizon, which neatly takes care
of singularity issues. The variable u can take on positive or negative values, but u = a and u = −a are
the same distance from the center of the mini black hole. I found nothing in the ER paper indicating
that +u and −u represent two different points in space, or points in two different spaces.
It gets worse. Maldacena and Susskind went on to say that instead of one middle-aged black hole
entangled with Hawking radiation from long ago, there are actually two black holes entangled with
each other through a wormhole.11 How the original black hole's doppelganger came into being is
anyone's guess. How far away is it, two miles or two billion light-years? Who knows? One last thing:
A wormhole connecting two 3-dimensional objects must pass through a fourth dimension of space.
The Schwarzschild metric only has three spatial dimensions, and the Einstein/Rosen paper didn't say anything
about extra dimensions. Sorry fellas, but the ER = EPR proposal just doesn't hold water.
Remember when I said that a virtual particle falling into a black hole has negative energy? Well, that
negative energy comes from gravitation. It's a well-known fact that as gravity strengthens, its energy
becomes more negative. Think of it in the following way. Suppose there are only two planets in the
universe and they are separated by a very great distance. Their gravitational attraction is negligible,
and in that state, the energy of the system of two planets is zero. When the planets approach each other,
their gravitational attraction increases but the energy of the system decreases. So if the initial energy of
the system is zero, gravitational energy must be negative. If virtual particles falling into a black hole
have negative energy, so must any particles falling into a black hole. In the case of a black hole,
gravity is so strong that its negative energy completely cancels what would otherwise be positive mass-
energy of matter falling into it. So here's the dirty little secret: Since things falling into a black hole
reduce its total mass-energy, the only value of M that works in the formula rs = 2MG/c² is zero!
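The two-planet argument can be sketched numerically with the ordinary Newtonian potential energy E = −G m₁ m₂ / r: near zero at great separation, and more negative as the planets approach. The masses and separations below are just illustrative numbers:

```python
# Numerical sketch of the two-planet argument: Newtonian gravitational
# potential energy E = -G*m1*m2/r starts near zero at great separation
# and becomes ever more negative as the planets approach each other.

G = 6.674e-11
m1 = m2 = 5.972e24                      # two Earth-mass planets, kg

separations = (1e15, 1e12, 1e9, 1e7)    # meters, shrinking
energies = [-G * m1 * m2 / r for r in separations]

for r, E in zip(separations, energies):
    print(f"r = {r:.0e} m  ->  E = {E:.2e} J")   # monotonically more negative
```

If the total starts at zero and only decreases, gravitational energy must be negative, which is the point of the paragraph above.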
The astrophysicist Abhas Mitra came to exactly the same conclusion, that M = 0, but in a much
more formal and elegant way. A number of people have reviewed Mitra's papers, and nobody has
found any mathematical errors in them. So the conclusion is inescapable: The only true black holes
that can exist are ones with zero mass. This doesn't mean that there are no very large astronomical

11 To paraphrase Will Rogers, “If entanglement got us into this mess, why can't it get us out?”

objects that look a lot like black holes when viewed from far away. It simply means that they don't
have external Event Horizons. Mitra calls them eternally-collapsing objects (ECOs). What keeps
ECOs from collapsing into true classical black holes is the fact that positive mass-energy is canceled by
the negative gravitational energy as gravitation goes to extremes. Also, ECOs have ultra high
temperatures, and pressure from light emitted at those temperatures would repel any material that tries to
collapse inward, although those temperatures may not appear to be all that extreme to an observer far
away due to gravitational red shifting. ECOs harken back to the Oppenheimer/Snyder argument that
collapsing stars never quite shrink below their Schwarzschild radii because of GR time dilation.
Here's the final nail in the black hole's coffin. According to orthodox GR, a black hole can have only
three attributes: mass, angular momentum, and electric charge.12 Although an electric field can escape
the Event Horizon, magnetic fields cannot. The reason is fairly straightforward. If a magnetic field
could escape, an Alice inside the Event Horizon could send coded messages to a Bob outside the Event
Horizon by reversing the polarity of a magnet held in her hand. The problem is that the black hole
candidates astrophysicists identify almost always have enormous magnetic fields – larger than any
other magnetic fields in the universe. Some may argue that those magnetic fields originate outside
their Event Horizons from matter spinning into them, but that argument seems a bit shaky. On the
other hand, there is nothing that would prevent ECOs from having arbitrarily strong magnetic fields
because they don't have Event Horizons.
One final note. There is nothing fundamentally wrong with the Schwarzschild metric.13 You can
mathematically squeeze the entire Earth into a marble-sized object having a radius rs and use the metric
to study the flight of golf balls, synchronize the clocks aboard GPS satellites, and so forth as long as
r ≥ re, where re is the radius of the Earth. You get into trouble, however, when you try to physically
squeeze the entire mass of the Earth into rs. You simply can't. Nature doesn't like singularities and if
you try to create them, She'll step in and say, “Oh no you don't. Not in my universe!” So while the
Schwarzschild equation is perfectly valid, interpreting it as describing a black hole is all wrong. The
real paradox is in the space between the ears of physicists instead of in outer space. Ideas involving
black holes are akin to geocentric cosmology, and it's simply amazing that the scientific community is
still trying to salvage a set of contradictions that should have been abandoned long ago.
The conjecture that black holes are sheer fantasy doesn't have many adherents, but there are a few.
Besides Abhas Mitra, there is Laura Mersini-Houghton of the University of North Carolina. Not only
does she claim black holes don't exist, she also claims to have united quantum mechanics and GR,
which is quite a claim. Her attack on black holes is therefore based primarily on a new form of
quantum mechanics, which makes me a bit queasy. Needless to say, there has been quite a lot of push
back from the scientific establishment, who point to all the astronomical “evidence” of black holes,
ignoring the alternative possibility that astronomers are actually seeing ECOs instead.
Hawking's latest attempt at tackling the black hole paradox is a bit of a hedge. He's not quite ready to
dispense with the Event Horizon altogether. Instead of an ultra-smooth, razor-thin GR boundary
between the universe and a black hole, he now sees the edge of a black hole as sort of a fuzz ball,
containing “soft photons” that project out into space. Information about infalling Volkswagens and
refrigerators is encoded in those soft photons, although Hawking isn't yet prepared to say how this is
done. So stay tuned because things are about to get very interesting.
12 Viewed as quantum-mechanical objects, they also have entropy and temperature.
13 Stephen J. Crothers disagrees. He's spent over ten years trying to debunk the Schwarzschild metric, claiming that it
doesn't satisfy Einstein's field equations or that it violates the foundation of those equations. Crothers' work has gained
very little traction in the scientific community, who completely dismiss his claims.

I have a confession to make. Earlier in this essay I told a lie that needs to be retracted. Well, it wasn't
exactly a lie, just a gross misunderstanding on my part. I stated that the distance “r” in the exterior
Schwarzschild metric is the proper distance of an observer near the gravitating body. This is wrong. It's
actually the proper distance of an observer at infinity, and that makes a difference. I realized it while
reading Abhas Mitra's paper, “Non-occurrence of Trapped Surfaces and Black Holes in Spherical
Gravitational Collapse: An Abridged Version” for the fifth time.14
I'll rewrite the exterior Schwarzschild metric in a slightly different form, dropping the last term because
dθ2 and dφ2 are zero:
ds² = −(1 − rs/r) c²dt² + (1 − rs/r)⁻¹ dr²
Here, ds is interpreted as an invariant proper distance. As r → rs, the (1 − rs/r) c²dt² term disappears, so
ds² = (1 − rs/r)⁻¹ dr² ⇒ ds = dr / √(1 − rs/r). Also notice that as r → rs, ds blows up. I think this means that
as an observer gets closer to the EH, proper distances (at least in the radial directions) in the observer's
reference frame become infinite. This is what some authors refer to as an “apparent” horizon because it
keeps receding as an observer gets close to it. Now if the observer is falling at relativistic speeds, there
will be Lorentz foreshortening in the direction of motion, so I suppose if the observer's velocity reaches
c, those infinite distances will become finite again.15 But for velocities below c, the observer will never
reach the EH because forward distances keep getting larger.
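The blow-up is easy to tabulate. Here is a quick sketch of the radial stretch factor ds/dr = 1/√(1 − rs/r) in units where rs = 1, showing it growing without bound as r approaches the horizon:

```python
# Sketch of the "apparent horizon" reading above: the radial stretch factor
# ds/dr = 1 / sqrt(1 - rs/r) from the metric grows without bound as r -> rs.
# Units: rs = 1.

def ds_dr(r):
    """Local radial proper-distance stretch factor (rs = 1)."""
    return 1.0 / (1.0 - 1.0 / r) ** 0.5

for r in (2.0, 1.1, 1.01, 1.0001):
    print(f"r = {r}: ds/dr = {ds_dr(r):.2f}")   # blows up as r approaches 1
```

Every step closer multiplies the local stretch, which is the "receding horizon" picture in numbers.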
I really love that idea of eliminating the EH because it removes a ton of paradoxes. One of these
involves poor Alice when she falls into a “black hole” and crosses the EH. According to Orthodox
Holology (OH), Alice feels no pain if the BH is large enough. That's the so-called "no drama"
postulate. This conflicts with the quantum mechanical version of this scenario, which says a firewall
must form at the EH in Alice's reference frame.16 Actually, the “no drama” postulate never made much
sense to me in the first place, because radial acceleration is supposed to be infinite at the EH according
to OH. Suppose Alice jumps into the BH toes first with her arms raised, like she's jumping off a 30-foot
diving platform. Her toes have infinite acceleration when they reach the EH, but unfortunately the
rest of her body still has finite acceleration, ripping off her toes. As different parts of her body pass
through the EH and have infinite acceleration, they get ripped off as well. This is definitely not cool,
and it's a far cry from “no drama” in my opinion. But if the EH is really an AH continuously receding
from Alice, then there's no problem. This lines up exactly with what her far-away lab partner Bob sees
happening to her; she hovers close to the EH but never crosses it. Problem solved. Now Bob sees what
Alice is actually experiencing, despite the fact that her clock is way out of synch with his.
Orthodox Holology originated by totally misapplying the exterior Schwarzschild metric. This metric
only applies to empty space, so the gravitating mass has to be stuck in at r = 0 in order for it to work.
So M was simply assumed to be at r = 0, and of course the metric obediently spits out a singularity at
r = 0 plus another one at r = rs. The question is whether such a thing can physically exist. Apparent
horizons validate Mitra's claim that a true black hole can never form. Of course it tries to form, but it
can't quite do it since an AH would just slip away from infalling material. An ECO forms instead.

14 I highly recommend reading this excellent paper, even though it is very pithy. It contains a wealth of knowledge.
15 It's unclear whether a material object falling into a “black hole” can actually reach light speed. I say it can't.
16 Refer to the 2012 AMPS Firewall Paradox.

Appendix A – Black Holes and Aristotle
Aristotelian logic forms the basis of the scientific method (or at least it did in the past). Aristotle's law
of the excluded middle is the third of the three classic laws of thought. It says that either a proposition
or its opposite must be true. The way this works in science is that if A is a scientific theory that has a
testable consequence, B, and B is shown to be false, then this proves A is false.
The scientific method can be summed up using logic symbols: (A → B) ∧ ¬B ⊢ ¬A.
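That schema (modus tollens) can even be verified mechanically. Here is a tiny truth-table check in Python:

```python
from itertools import product

# Tiny truth-table check of modus tollens: in every row where
# (A implies B) and (not B) both hold, A must be false.

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

for A, B in product([False, True], repeat=2):
    if implies(A, B) and not B:
        assert not A   # A -> B, not B |- not A

print("modus tollens holds on all truth assignments")
```

Nothing deep here; it simply confirms that the rule science relies on is airtight as pure logic.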
Unfortunately, Aristotelian logic does not apply to Gedanken World. In that world, when B is shown to
be false, you merely assume A is still true and label B as a paradox that will lead to some new, deeper
understanding of reality that will more than likely turn out to be based on string theory. In Gedanken
World, it is assumed that M > 0 exists at r = 0. Then the inevitable singularity appears at r = 0 (plus an
unexpected one at r = rs) by applying the exterior Schwarzschild metric.
In the old days, scientists worried about details such as exactly what physical mechanism would allow
a system to go from state X to state Y. Albert Einstein and J. Robert Oppenheimer were such scientists.
Modern physicists refer to themselves as either experimental physicists or theoretical physicists, but
neither kind seems to be worried about such details. Experimental physics is all about doing Big
Science experiments that cost lots of money, while theoretical physics is all about doing gedanken
experiments where things occur simply by assuming they do. Fortunately, there are a few who still do
science in the spirit of Einstein and Oppenheimer. Abhas Mitra is one of them.
The Schwarzschild metric comes in two forms: An exterior form we've already discussed, and an
interior form. These are the only exact solutions to Einstein's field equations. The exterior form is
valid only in empty space,17 while the interior form is valid only within a gravitating body of uniform
density, ρ. In Gedanken World, we can imagine making a black hole by building a sphere of increasing
radius rg with this material having a total mass M = (4/3)πρrg³. The Schwarzschild radius rs = 2GM/c² is
initially smaller than rg, but since rs grows in proportion to rg³, sooner or later rs ≥ rg. This crossover
point is rg ≥ √(3c²/(8πGρ)), which turns the sphere into a BH with an EH. Black hole defenders like to
point out that the internal pressure at the center of the isotropic sphere reaches infinity when rs = 8rg/9,
meaning that no material substance can resist being compressed into a BH even before rs ≥ rg. This
conveniently ignores the fact that in order for the interior Schwarzschild metric to work, the substance
in question must have a uniform ρ, which defines it as being incompressible.18
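Here is a quick sketch of that crossover radius for an ordinary uniform density. Water (ρ = 1000 kg/m³) is just my illustrative choice:

```python
import math

# Sketch of the crossover estimate above: for a uniform-density sphere,
# rs = 8*pi*G*rho*rg^3 / (3*c^2), so rs catches up with rg at
# rg_cross = sqrt(3*c^2 / (8*pi*G*rho)). Water density is illustrative.

G = 6.674e-11; c = 2.998e8
rho = 1000.0                                    # kg/m^3, water

rg_cross = math.sqrt(3 * c**2 / (8 * math.pi * G * rho))
print(f"crossover radius for water ~ {rg_cross:.2e} m")   # hundreds of gigameters

# Sanity check: at that radius, rs should equal rg exactly.
M = 4.0 / 3.0 * math.pi * rho * rg_cross**3
rs = 2 * G * M / c**2
print(f"rs / rg at crossover = {rs / rg_cross:.6f}")
```

So in Gedanken World, a ball of water a few astronomical units across would already be inside its own Schwarzschild radius.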
Another way to build a BH is using a hypothetical substance called “dust,” which exerts zero
pressure.19 Oppenheimer and Snyder came up with what seemed a plausible scheme to do this in 1939,
but Mitra shows that it was a fallacy.20 It boils down to the fact that “dust” particles with mass m > 0
would also have zero radius, meaning the gravitational energy of the dust particle is Eg = – ∞,
completely canceling out any positive mass-energy you could put into the particle.21
What about building a black hole from a sphere of real, compressible material? Unfortunately, an
equation of state for such material is not available for GR, but Mitra shows this fails also (refer to
http://arxiv.org/pdf/astro-ph/9910408v5.pdf). It seems that Nature invented gravity with an
inexhaustible supply of negative energy, and She uses it to block any attempt to form a true BH.
17 This forces us to assume that M is confined to a point at r = 0.
18 Mitra shows that large, constant-ρ objects are prohibited by general relativity: http://arxiv.org/pdf/1012.4985v1.pdf
19 “Dark matter” also appears to be a hypothetical zero-pressure substance, but that's a whole other issue.
20 Check out his paper at this link: http://arxiv.org/pdf/1101.0601.pdf
21 Unless, of course, you're able to squeeze an infinite amount of positive mass-energy into it.

Appendix B – Inside and Out
Appendix A discussed applying the interior Schwarzschild metric to build a BH using a spherical body
of a substance having uniform density. I will go into a little more detail here because it's very
interesting. The interior Schwarzschild metric is presented below.
c²dτ² = ¼ (3√(1 − rs/rg) − √(1 − r²rs/rg³))² c²dt² − (1 − r²rs/rg³)⁻¹ dr² − r² (dθ² + sin²θ dφ²)

Here, rg is the radius of the spherical gravitating body as measured in a distant observer's reference
frame. When r = rg, the interior metric becomes exactly the same as the exterior metric. As before, I'm
going to consider only a static case where dr², dθ², and dφ² are all zero, so the last two terms drop out.
Dividing by dt2 and taking the square root of both sides of what's left of the equation results in this:
c (dτ/dt) = ½ (3√(1 − rs/rg) − √(1 − r²rs/rg³)) c
The quantity c (dτ /dt) is interpreted as “time velocity,” or the local rate of travel through time in
warped space-time. In flat space-time c (dτ /dt) = c.
Consider a group of spheres with uniform density, ρ, and radius rg = 9. We can select rs by setting
the density parameter, ρ, in this equation: rs = 8πGρrg³ / 3c². The quantity dτ/dt is plotted below for
three spheres of rs = 7.5, 8.0, and 8.5. (We revert to the exterior Schwarzschild metric for r > 9.)

[Plot: dτ/dt versus r for the spheres with rs = 7.5, 8.0, and 8.5.]
We see right away that something strange happens when rs > 8.0. The dτ/dt values go negative in the
middle of the sphere, meaning local time is going backward with respect to time at a distant location!
Also note that dτ/dt = 0 at the center when rs = 8.0. This stoppage of time occurs where rs = 8/9 rg, the
point at which “conventional wisdom” says pressure at the center of the sphere is infinite. Yet the
sphere hasn't even “officially” turned into a BH yet, because no singularity has formed.
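The central values of those curves are easy to check. From the interior metric with dr = dθ = dφ = 0, dτ/dt = ½(3√(1 − rs/rg) − √(1 − r²rs/rg³)); here is a short Python check at r = 0, in the same rg = 9 units used above:

```python
import math

# Check of the central values of the dtau/dt curves described in the text,
# using the static interior-metric formula
# dtau/dt = (3*sqrt(1 - rs/rg) - sqrt(1 - r^2*rs/rg^3)) / 2, with rg = 9.

def dtau_dt(r, rs, rg=9.0):
    """Interior-metric time-velocity factor at radius r inside the sphere."""
    return 0.5 * (3 * math.sqrt(1 - rs / rg) - math.sqrt(1 - r**2 * rs / rg**3))

for rs in (7.5, 8.0, 8.5):
    print(f"rs = {rs}: dtau/dt at center = {dtau_dt(0.0, rs):+.4f}")
```

The output confirms the text: positive for rs = 7.5, exactly zero at rs = 8.0 (which is (8/9)rg), and negative for rs = 8.5.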
Clearly, gravity is doing very strange things in this situation, but reality is masked behind a faulty
initial assumption that gravitating mass per unit volume, ρ, is uniform (or dρ / dr = 0). In fact, negative
gravitational energy reduces effective mass-energy, creating an effective density, ρ' < ρ. Consequently,
dρ' / dr < 0, so crazy curves like those shown above are prevented from emerging in the real world.

8
Appendix C – A Bridge Too Far
There is much talk lately about Einstein-Rosen (ER) bridges and how they connect different parts of
the universe or maybe even different universes. Of course this all has to do with black holes. Once
BHs are accepted as real, anything is possible. ER bridges come from a paper written by Albert Einstein
and Nathan Rosen in 1935, entitled “The Particle Problem in the General Theory of Relativity.”
In 1935 there were only about five known particles: the electron and its antiparticle, the positron, the
proton, the neutron, and the neutrino.22 That was about five particles too many for Einstein, who
believed that they should all be replaced with solutions of his field equations. So that's what he and
Rosen set out to do in their paper. They used the following form of the exterior Schwarzschild metric.
ds2 = (1 – 2m/r) dt2 – dr2/(1– 2m/r) – r2 (dθ2 + sin2θ dφ2)
As usual, r is the radial distance from the center of a spherical coordinate system in the frame of
reference of a distant observer. The constants G and c are set to 1, with the mass of a particle equal to
m. Since this is the exterior Schwarzschild metric and it applies only to empty space, it must be assumed
that m is located at r = 0, so naturally there will be singularities – a mini black hole. ER clearly
recognized this. Quoting from their paper:
“If one solves the equations of the general theory of relativity for the static spherically symmetrical case, with or
without an electric static field, one finds that singularities occur in the solutions.”
ER then devised a clever trick with a change of variable, u2 = r – 2m, altering the solution thusly.
ds2 = [u2/(u2 + 2m)] dt2 – 4 (u2 + 2m) du2 – (u2 + 2m)2 (dθ2 + sin2θ dφ2)
The radial distance, r ≥ 2m, is replaced by a new variable u2 = r – 2m over the range – ∞ < u < ∞. The
above expression does not have any singularities over this infinite range, but it must be noted that the
variable u = ± √(r – 2m) is not a physical distance. (Here's a question to ponder: what is the square root
of four feet? Is it ± two feet maybe?) If we reduce ordinary three-dimensional space into a two-
dimensional sheet, the interior of the mini black hole disappears entirely in “u-space,” replaced by a
“bridge” at u = 0 that connects a pair of two-dimensional “u-sheets” as depicted below.
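The substitution itself can be verified numerically. The sketch below (plain Python, G = c = 1, with m = 1 chosen arbitrarily for illustration) confirms that the dt² coefficients agree and that the du² coefficient, 4(u² + 2m), stays finite for every u:

```python
# Check the Einstein–Rosen substitution u² = r − 2m numerically (G = c = 1).
# m = 1.0 is an arbitrary value chosen purely for illustration.
m = 1.0

def g_tt_r(r):
    """Coefficient of dt² in the Schwarzschild form: 1 − 2m/r."""
    return 1.0 - 2.0 * m / r

def g_tt_u(u):
    """Coefficient of dt² in the u-coordinate form: u²/(u² + 2m)."""
    return u**2 / (u**2 + 2.0 * m)

for u in (0.5, 1.0, 2.0, 3.0):
    r = u**2 + 2.0 * m                   # r runs over (2m, ∞) as u varies
    assert abs(g_tt_r(r) - g_tt_u(u)) < 1e-12
    # The radial term dr²/(1 − 2m/r) with dr/du = 2u becomes 4(u² + 2m) du²,
    # which stays finite for every u:
    assert abs((2.0 * u)**2 / g_tt_r(r) - 4.0 * (u**2 + 2.0 * m)) < 1e-9

print("u-substitution reproduces the Schwarzschild coefficients")
```

Note that at u = 0 (the throat of the bridge, r = 2m) the dt² coefficient vanishes but nothing blows up, which is exactly the point of the change of variable.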

One sheet corresponds to u < 0 and the other sheet corresponds to u > 0. However, the “bridge” or tube
connecting the two sheets should not be thought of as poking through some extra-dimensional space in

22 Paul Dirac proposed the positron in 1928 and Carl Anderson discovered it in 1932, the same year James Chadwick
discovered the neutron and Enrico Fermi proposed the neutrino, although the first observation of a neutrino had to wait
until 1956.

9
the vertical direction. ER were quick to point out that their scheme should not be interpreted as a
bridge between two separate physical spaces. Quoting from their paper:
“If we consider once more the solution from the standpoint of the information we have acquired from the
Schwarzschild solution, we see that there also the two congruent halves of the space for x1 < 0 and x1 > 0 can be
interpreted as two sheets each corresponding to the same physical space.” [emphasis added]
I use the geometric definition of the word “congruent” from the dictionary: (adj.) identical in form,
coinciding exactly when superimposed.
The ER theory is just another failed attempt to build a bridge between particle physics and general
relativity, thereby unifying GR with QM; but it's a bridge to nowhere. Quoting ER again:
“On the other hand one does not see a priori whether the theory contains the quantum phenomena. Nevertheless
one should not exclude a priori the possibility that the theory may contain them.”
The bottom line is there is nothing in the ER paper to imply that black holes, either mini or maxi, create
bridges or wormholes that connect remote regions of space. Those belong only to the realm of
science fiction. Those who believe in wormholes try to justify these fantasies by twisting the ER paper
into unrecognizable knots and then presenting it as an appeal to a higher authority.
Finally, I can find nothing in the ER paper that is remotely related to quantum entanglement. Yet there
are modern physicists who insist that quantum entanglement is equivalent to an ER bridge or a
wormhole. This non-existent relationship between quantum entanglement and non-existent wormholes
has been carried to some truly bizarre extremes. Make-believe objects like the ones depicted below are
now popping up fairly regularly in peer-reviewed scientific journals. (I am not making this up.)

In case you're wondering what the objects above are, they're wormholes emerging from black holes that
are entangled. The one on the left connects three separate regions of space-time and the one on the
right connects four regions. The red circles are wormhole Event Horizons (or firewalls maybe?).
An esteemed west coast professor recently gave a talk on the topic of entanglement = wormholes. He
explained rather nonchalantly how one would go about creating a pair of entangled black holes. He
said first you create many pairs of entangled particles and collect them into two bunches. Then you
send those bunches of particles off to opposite ends of the universe and squeeze them until they turn
into black holes. It's easy. (I swear I am not making this up. You can watch him saying this on
YouTube.) I wonder what new areas of research the good professor will pursue when the physics
community finally comes to its senses and realizes there are no true black holes.

10
Appendix D – Fuzz or Fire?
A workshop was held in August 2013 at the Kavli Institute for Theoretical Physics of the University of
California, Santa Barbara. The title of the workshop was “Black Holes: Complementarity, Fuzz, or
Fire?” and it was coordinated by Raphael Bousso, Samir Mathur, Rob Myers, Joe Polchinski, and
Leonard Susskind, with Don Marolf acting as “scientific advisor.”23 The charter of the workshop
published on its web site includes the following sentence.
“Understanding the quantum properties of black holes – the nature of black hole entropy, and the fate of black
hole information – has been a driving problem in theoretical physics, leading in particular to the discovery of …”
I could never get my head around quantum properties of large objects like refrigerators, cars, and black
holes. I always assumed quantum properties only applied to tiny, low-energy things. Oh well …
Anyway, the highlight of “Fuzz or Fire” in my opinion was a video conference between the workshop
attendees and the indomitable Stephen W. Hawking, who was in England at the time.24 Hawking gave
a prepared talk, where he basically admitted he had been wrong about black holes for the past four
decades. Hawking is a man of extreme courage and character, and he should be commended for his
honesty and integrity in owning up to his mistake.25 In his talk, he said that the classical, infinitely
smooth, and razor-thin Event Horizon according to Orthodox Holology is a fiction, and it should be
replaced by an “apparent horizon” that's extremely turbulent, chaotic, and deterministic. He compared
information hiding behind a chaotic, turbulent apparent horizon to the problem of weather forecasting.
After Hawking was finished, the microphone was passed around for the audience to ask him questions.
Of course, Hawking had to submit his answers at a later date because he is unable to “talk” in real time
through his computer. Lenny Susskind asked Hawking if he could point out where mistakes were made
in the AMPS paper. I'm convinced the biggest mistake was assuming an Event Horizon exists in
the first place. The EH is the source of all the trouble, leading to an endless series of abstract
workarounds in order to rescue it, similar to the epicycles that were needed to save the Ptolemaic
theory in the face of mounting astronomical observations that contradicted it.
In January 2014, Hawking followed up his talk with a rather short paper entitled “Information
Preservation and Weather Forecasting for Black Holes.”26
“Thus, like weather forecasting on Earth, information will effectively be lost, although there would be no loss of
unitarity.”
Note he said information is “effectively lost” but not actually lost, which warms the cockles of my
heart. Deterministic, chaotic systems may appear quite similar to indeterminate, stochastic systems,
but the two things are fundamentally very different. The interiors and surfaces of the ECOs that Abhas
Mitra describes are very chaotic, just as the interiors and surfaces of ordinary stars, white dwarfs, and
neutron stars are very chaotic. No information is lost inside those objects either.

23 Two of these individuals, Marolf and Polchinski, were responsible for the AMPS firewall paradox that stirred up so
much controversy in the theoretical physics community in the past several years. The other names are equally famous.
Santa Barbara is a lovely spot and I wish I could have been there.
24 Hawking was unable to fly to LA to attend the workshop because of his deteriorating health. A very low-quality video
25 After all, it was the publication of his information paradox in 1974 that sent the theoretical physics community into its
current tailspin.
26 It is devoid of mathematical obfuscation and very easy to read. Here's the link: https://arxiv.org/pdf/1401.5761v1.pdf

11
Appendix E – Twinkle, Twinkle, ECO
In astrophysics, it's important to know how objects are formed and evolve in the physical sense, and not
just because mathematical equations say they can. Arthur Eddington and Subrahmanyan
Chandrasekhar were two old-school astrophysicists who worried about physical details.27 There is a
long, twisted, and rather sad tale about the relationship between those two men, which I won't go into
here.28 But I will describe the discoveries they made about stellar evolution by actually observing stars.
An ordinary star “burns” by converting light elements into heavier ones by thermonuclear fusion
processes. The light elements are “fuel” and the heavier ones are the “ashes.” When a star runs out of
fuel, it dies. The way it dies depends mainly upon its total mass. Stars that are about as heavy as our
Sun die a relatively peaceful death. The Sun's main fuel is hydrogen, which turns into helium. When the
hydrogen runs out, the Sun will swell into a red giant star, blowing off mass from its cooler outer
layers into space while its helium core contracts and gets hotter. At some point, the helium ignites as
fuel, turning 40% of the Sun's mass into carbon in a matter of minutes. The core will continue to
contract, getting hotter and burning through its supply of helium. Over the next 20 million years or so,
the Sun will be unstable and blow off more of its mass in a series of thermal pulses. About 50% of the
Sun's mass will be lost, exposing a naked core. The final state is a white dwarf – a very hot ultra-
compact core made of mostly carbon with a bit of unburned helium.
Chandrasekhar worked on the details of stellar collapse and discovered a white dwarf has a maximum
mass; it's about 1.4 times the solar mass, or 1.4 M☉. The reason for this involves quantum mechanics
and relativity and I won't go into all the gory details other than to say that when M > 1.4 M☉, called the
Chandrasekhar limit, the degenerate electrons inside a white dwarf would be required to move
faster than the speed of light in order to resist the pressure exerted by gravity.29
After the Chandrasekhar limit was discovered, it wasn't clear what would happen if a very large star
couldn't manage to shed enough mass to get below this limit when becoming a white dwarf. It was
later determined that at extreme pressures, electrons are driven inside protons, releasing neutrinos and
making a ball of neutrons that forms a neutron star. A neutron star that is somewhat heavier than the
Chandrasekhar limit is able to generate enough internal pressure to resist further gravitational
compression. Stars several times heavier than the Sun will die in giant supernova explosions that blast
most of the star's mass away, leaving a neutron star behind. But there is a maximum size of a neutron
star, called the Landau–Oppenheimer–Volkoff (LOV) limit. A ball of neutrons is only able to resist
gravitational pressure up to the LOV limit, which is somewhere around 3.0 M☉.
The LOV limit certainly doesn't leave much headroom above the Chandrasekhar limit, so the natural
question is what happens if a neutron star happens to be heavier than the LOV limit. The answer nearly
everyone accepted is that it turns into a black hole because no other explanations were available. The
problem with that conclusion is that it leads to violations of the baryon and lepton conservation laws.
Ordinary matter is made up of baryons and leptons. The heavyweight baryons were once considered
fundamental particles but we know now they are composed of three quarks. There are 40 different
baryons and they're not going to be listed here because all we really have to worry about are protons

27 For some reason, theoretical physicists stopped worrying about those things around the middle of the 20 th century.
28 The story is told in Arthur I. Miller's book Empire of the Stars: Obsession, Friendship, and Betrayal
in the Quest for Black Holes.
29 Remember that gravity provides an unlimited supply of pressure. Miller's book discloses that when Chandrasekhar
discovered the limit, Eddington at first tried to discredit him and then later tried to claim credit for this discovery.

12
and neutrons. The lightweight leptons include the electron and the neutrinos. Antiparticles count as
having negative baryon or lepton numbers. For example, when a neutron decays into a proton plus an
electron and an anti-neutrino, the baryon number starts out as one and ends up as one. The lepton
number starts out as zero and ends up zero because the electron and the anti-neutrino cancel each other
lepton-number-wise. This limits the kinds of elementary particle interactions that are allowed to occur,
and it should also limit the kinds of objects a neutron star can evolve into since its lepton number is 0.
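The accounting described above can be made concrete with a toy tally of baryon and lepton numbers for the neutron-decay example. This is just a sketch, and the particle table covers only the particles mentioned here:

```python
# Baryon number B and lepton number L for the particles in n → p + e⁻ + ν̄e.
NUMBERS = {            #   (B, L)
    'n':         (1, 0),
    'p':         (1, 0),
    'e-':        (0, 1),
    'anti-nu_e': (0, -1),   # antiparticles carry negative lepton number
}

def tally(particles):
    b = sum(NUMBERS[x][0] for x in particles)
    l = sum(NUMBERS[x][1] for x in particles)
    return b, l

before = tally(['n'])                      # (1, 0)
after  = tally(['p', 'e-', 'anti-nu_e'])   # (1, 0): e⁻ and ν̄e cancel
print(before, after)
assert before == after   # the decay conserves both numbers exactly
```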
According to the rather primitive theory of black holes prior to 1974, a star collapses into a white
dwarf, then a neutron star before finally becoming a black hole. The baryon number would be the same
as it was just before it became a black hole, and according to the baryon accounting system, that
number would be frozen behind the Event Horizon for eternity. With the advent of Hawking radiation,
that all changed since Hawking radiation can emit virtually anything, including leptons, which
shouldn't be emitted at all when the lepton number is zero, and mesons, which are neither baryons nor
leptons. It could be argued that the number of emitted leptons and anti-leptons would be statistically
about equal, bringing the total number of emitted leptons close to zero, but “close” is not good enough
in this accounting system; the numbers must match exactly. When the black hole completely
evaporates as Hawking radiation, we would likely find that both the baryon and lepton conservation
laws are violated. If I'm not mistaken, that would be as bad or worse than the information paradox.30
On the other hand, an ECO doesn't have an Event Horizon, nor does it have Hawking radiation. So
whatever material is ejected comes from inside the ECO. Internal temperatures would be ridiculously
high, and there may be particle transmogrification events occurring inside that physicists don't even
know about yet; but regardless of what kinds of crazy subatomic particles are created, an ECO would
still conserve both the baryon number and the lepton number in the material it spews forth.
The gravitational effects of an ECO would be similar to, but not quite the same as, those associated
with a black hole. There would be no event horizon and no singularities, but there would be a huge
gravitational red shift at the “surface” or “photosphere” of the ECO. In fact, a distant observer might
see a very cold surface emitting very little radiation. In that state, the ECO would almost appear as a
static adiabatic system with a constant internal energy. Even in the local frame of reference, outward
radiation pressure would be extremely strong, balancing gravitation and making further contraction
proceed extremely slowly, although Mitra predicts that ECOs undergo periodic violent eruptions.
The end state of the system would be a zero-mass black hole, but it could only physically achieve that
end state by radiating away all of its mass-energy. In that regard, the ECO resembles a black hole
emitting Hawking radiation. But there is one very important difference. In Appendix D, I mentioned
that Stephen Hawking now believes the “apparent horizon” of a black hole is a turbulent, chaotic place.
Chaos is deterministic, and Hawking knows this. Thus the radiation that bears his name should no
longer be thought of as being a stochastic quantum process, but as a mundane classical one instead.
The time it takes for an ECO to complete its collapse is … well … an eternity. Mitra calculated this
time based on conservation of baryon number in his paper “Cosmological Properties of Eternally
Collapsing Objects (ECOs)” and he showed this time is literally infinite.
I suspect a neutron star may emerge from an ECO within a finite time once its mass gets below the
LOV limit. I base this hunch on the fact that the transition from a neutron star into an ECO plasma ball
should be reversible for the most part, provided the baryon and lepton numbers are conserved.

30 I'm sure that modern physicists could come up with convincing arguments based on string theory and AdS/CFT as to
why it's perfectly okay that black holes violate the baryon and lepton conservation laws. I'm also quite sure those
arguments would sail right over my head because I can only think in simple engineering terms.

13
Appendix F – Forensic Examination of Black Hole Theory
I hope I've managed to convince my readers that true black holes aren't real, or at least they are not
physical objects. So why were so many astrophysicists, theoretical physicists, and the lay public led to
believe in them? What went wrong? In this appendix, I'll try to find out by performing a forensic
examination of sorts about black hole theory (BHT). First it should be stressed that BHT is a separate
theory in its own right and not an inevitable outcome of the general theory of relativity (GR).
In his book Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth,
author Jim Baggot points out that there really is no formula that produces a scientific theory. As
strange as it seems, making a scientific theory is more of an art than a science. A good theory should
not only explain phenomena that already have been observed, but it should make predictions that can
be tested and which no other theory has made before. When Albert Einstein presented his general
theory of relativity, the precession of Mercury's orbit was already well-known. In fact, there was a
theory that explained this based on a planet near the Sun, named Vulcan,31 which perturbed Mercury's
orbit based on Newton's laws. So the prediction was the planet Vulcan, but when no such planet could
be found, the theory collapsed. On the other hand, general relativity explained Mercury's precession
quite accurately without any extra planets. However, that wasn't enough for physicists to accept the
new theory. It took another prediction that nobody had previously observed, namely the precise angle
that a light ray would be deflected by the Sun, to convince people that Einstein was really on to
something big. That prediction was confirmed by Arthur Eddington during a 1919 solar eclipse.
The thing to keep in mind is that scientists make theories, scientists are people, and people are fallible.
Thus, theories can appear ironclad for a very long time to everyone, but are later discovered to be way
off the mark.32 To begin our investigation, let's lay out a time line:
• October 1915 – Einstein published his GR field equations with a glaring mistake.
• November 1915 – Einstein presented his corrected GR field equations to the Prussian Academy
of Science.
• December 1915 – The physicist/astronomer Karl Schwarzschild worked out exact solutions of
the GR field equations for symmetrical spherical objects and forwarded them to Einstein in a
letter he sent from the WWI trenches. It was soon realized that a funny thing happens if the
gravitating mass is contained within a Schwarzschild radius: an inexplicable singularity at rs.
• 1924 – Eddington mathematically removed the singularity at rs by changing coordinates.33
• 1929 – Eddington proposed nuclear fusion as the source of energy of the stars, and Robert
Atkinson and Fritz Houtermans refined the fusion theory.34
• 1931 – By incorporating relativity, Subrahmanyan Chandrasekhar calculated that a white dwarf
star could not be heavier than 1.4 M☉. Initially, his calculations were rejected because they
suggested that stars heavier than this limit would have to turn into black holes.35

31 No, that is not the home planet of Mr. Spock in “Star Trek.”
32 The Ptolemaic model lasted for centuries, with some help from the Roman Catholic Church of course.
33 In 1933 Georges Lemaître determined Eddington's approach yielded an “unphysical singularity.” Einstein and Rosen
used a similar mathematical trick in 1935 to remove singularities by using a non-physical distance, u = ± √(r – rs).
34 It should be noted that nobody envisioned using fusion to make a hydrogen bomb at that point. That nightmare would
be realized after WWII.
35 I think it's interesting that in 1931 physicists were still willing to reject mathematical results that were considered non-
physical or absurd. That changed in 1932 when a positively-charged electron was observed that confirmed a previous
mathematical prediction by Paul Dirac. Since then, physicists accept mathematical results as being physically real no
matter how nonsensical they may seem or how contradictory they actually are.

14
• 1934 – By this time, the existence of the neutron (discovered by Chadwick in 1932) was firmly established.
Walter Baade and Fritz Zwicky proposed the existence of neutron stars.
• 1939 – Robert Oppenheimer and George Volkoff calculated an upper limit for the mass of a
neutron star as 3 M☉. Interest in black holes was renewed.
• 1939 – In May, Einstein published a paper where he considered building a black hole from
infalling “dust.” He concluded that even if an Event Horizon formed, the dust particles couldn't
get inside a radius r = rs (2 + √3). In July, Oppenheimer and Snyder published a paper that
examined what would happen when a star runs out of fuel. To make things simple, they
assumed the star couldn't generate any internal pressure, which made it essentially a dust ball.
They concluded it might turn into a black hole but it would take a really, really long time.36
• 1940 to Present – Observations revealed very heavy compact astronomical objects – much
heavier than any neutron star could ever be. BHT started taking hold while some of its
predictions, like gravitational lensing, were confirmed by observation.
In summary, from 1915 to about 1931, black holes were pretty much considered to be non-physical
mathematical curiosities. After 1931, they became a possibility, and since around 1970 BHT became a
virtual certainty for any astronomers or theoretical physicists having a shred of scientific street cred.37
Unfortunately, I'm convinced the overwhelming acceptance of BHT is based on a very common fallacy
known as the appeal to ignorance. Kip Thorne, Carl Sagan, Stephen Hawking et al are/were truly
brilliant men, so I'm not implying they are/were ignorant. “Appeal to ignorance” is a term used in
philosophy to describe situations where people accept an answer to a question simply because they
don't have any other answers. When a star uses up its fuel and begins to collapse, it becomes a neutron
star, the most compact object that can exist (based on our current understanding of physics). Since the
largest possible mass of a neutron star is 3 M☉, the natural question is what happens if the mass
exceeds that. Well, the only answer anyone can think of is that it turns into a black hole. Thus, BHT
became the theory of humongous things (ToHT) by default.
Like other theories, BHT makes predictions and some of them turned out to be true; but unfortunately,
others turned out to be contradictions. Now when that happens, it doesn't necessarily mean you have to
throw out the entire theory. Sometimes, you can keep the theory and fix things by ruling out the
assumptions that resulted in those contradictions. When Hawking discovered the information paradox
in 1975, at least seven resolutions were proposed. In 1997, something called AdS/CFT duality came
along, based on string theory of course. AdS refers to anti de Sitter space, invented by Willem de Sitter
a long time ago. AdS space is a toy universe with as many (or as few) time and space dimensions as
you want. CFT refers to conformal field theory, which maps what goes on in n-dimensional space onto
an (n –1)-dimensional surface surrounding it. The cool thing is that you can use AdS/CFT to describe
just about anything, including the insides of black holes. Somehow AdS/CFT solves the black hole
information paradox38 and it became all the rage in theoretical physics.
Unfortunately, AdS/CFT led to the Mother of All Paradoxes, the AMPS firewall. At this stage, I think
it should be fairly obvious (it is to Hawking at least) that the easiest and perhaps only way to resolve all
of the black hole paradoxes is to let go of the BHT, or at least the part about the Event Horizon.

36 Abhas Mitra has shown that zero-pressure “dust” particles must be zero-volume point particles, and that GR doesn't
allow point particles to exist. Also, he shows that one of the formulas in the Oppenheimer-Snyder paper actually shows
the time to form a black hole is infinity. This conclusion escaped the original authors but it agrees with the ECO theory.
37 Denying the existence of black holes won't get you burned at the stake like those in medieval times who denied the
Earth was the center of the universe. But doing that will certainly terminate one's NSF grants in theoretical physics.
38 Just don't ask me how.

15
Up till now, this essay has described a neutron star as a big ball of neutrons, but that's an over-simplified
picture. For starters, observed neutron star candidates spin like crazy, with rotation rates as high as
roughly 1,000 revolutions per second, and I would think this would flatten a neutron star into a highly
oblate ellipsoid or maybe even a pancake shape. Second, non-solid objects in nature such as
clouds and stars don't have sharp edges. The visible edge of the Sun is the photosphere, but the Sun
doesn't end there. It's simply that light escapes from there when the scattering length of the light waves
approaches the radius of the scattering medium. The same thing is true for clouds. They don't have
sharp edges; their moisture thins out very gradually. So neutron stars probably don't have sharp edges
either. Third, the interior of a neutron star is probably far more complex than just a ball of neutrons,
having layers like the cutaway of a neutron star spinning rapidly around the vertical axis shown below.

The outer layer is probably very much like the composition of an ordinary star – an ionized gas. Below
that layer is a layer that resembles a white dwarf, with electrons buzzing around near the speed of light.
Below that is a layer of neutrons (plus some residual protons and electrons), created from electron
capture by protons according to this reaction: p + e– → n + νe, where νe is an electron neutrino that
escapes into space and carries off the electron's lepton number. At the center of a neutron star is
what some physicists believe may be a plasma of quarks plus gluons.
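The electron-capture reaction can be checked against the same conservation rules discussed in Appendix E, this time tracking electric charge as well. An illustrative sketch, with a table covering only these four particles:

```python
# Charge Q, baryon number B, and lepton number L for p + e⁻ → n + νe.
PROPS = {          #   (Q, B, L)
    'p':    (+1, 1, 0),
    'e-':   (-1, 0, 1),
    'n':    ( 0, 1, 0),
    'nu_e': ( 0, 0, 1),   # the escaping neutrino carries the electron's lepton number
}

def totals(particles):
    q = sum(PROPS[x][0] for x in particles)
    b = sum(PROPS[x][1] for x in particles)
    l = sum(PROPS[x][2] for x in particles)
    return q, b, l

assert totals(['p', 'e-']) == totals(['n', 'nu_e']) == (0, 1, 1)
print("electron capture conserves Q, B, and L")
```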
The universe tends to go to the lowest possible energy state. Considering only nuclear physics, this
would result in a universe made of iron, since lighter elements would tend to fuse into heavier elements
and heavy elements would tend to fission into lighter elements until they meet in the middle as iron,
having the lowest possible nuclear energy. When gravity is added to the mix, it has unlimited negative
energy. So under gravitational contraction, the matter causing the gravitation must be elevated to states
having higher energies in order to balance the negative energy of gravity, each descending layer of a
neutron star containing matter excited to higher and higher energy states. Note however, there are no
sharp dividing lines between the layers – each one gradually blends into the layers next to it.
Until fairly recently, neutrons were considered fundamental particles. In the current standard model of
particle physics, a neutron is made of one “up” quark with an electric charge of +2/3 and two “down”
quarks each with an electric charge of – 1/3, all held together by the strong nuclear force. Ordinarily,
the strong force makes it impossible to separate the three quarks, but with a high enough energy, the
distances between them might stretch a lot, forming a plasma of electrically-charged quarks and gluons,
which are quanta of the strong force that come in various “colors.” This would be the highest energy
state of matter that we know of, but bear in mind that we don't know everything yet.
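As a quick arithmetic check on the quark charges quoted above (a trivial sketch using exact fractions):

```python
from fractions import Fraction

up   = Fraction(2, 3)    # electric charge of an "up" quark
down = Fraction(-1, 3)   # electric charge of a "down" quark

neutron = up + 2 * down  # u d d
proton  = 2 * up + down  # u u d
print(neutron, proton)   # 0 1
assert neutron == 0 and proton == 1
```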

16
Appendix G – The Destroyer of Worlds
As J. Robert Oppenheimer watched the first atomic mushroom cloud ascend over Alamogordo, NM at
5:29:45 on July 16, 1945, he recalled a verse from the Bhagavad-Gita: "Now I am become Death, the
destroyer of worlds." There were reports of some side bets among the physicists at the Trinity test site
as to whether their “gadget” would work, and if it did whether it would ignite the atmosphere and
destroy the Earth.39 Edward Teller, whom I always considered to be some sort of mad scientist, did
some calculations showing it really could ignite the atmosphere. Luckily, it turned out that the verse
from the Bhagavad-Gita didn't come true that morning or for any of the subsequent nuclear tests.
The topic of black holes kind of reminded me of the Alamogordo test because a similar world-ending
scenario emerged as preparations were being made to fire up the Large Hadron Collider (LHC) at the
CERN research facility on the French-Swiss border. This giant machine was designed to generate
beams of protons that collide with an energy of 14 TeV per proton pair. Although the initial runs had
only about half that energy, some feared it would produce black holes that could swallow up the Earth.
A series of alarming headlines flashed across the news media. As late as July, 2016 headlines like these
were still appearing: “Will Large Hadron Collider destroy Earth? CERN admits experiments
could create black holes. One way or another the huge Large Hadron Collider is going to 'finish
off the planet,' according to conspiracy theorists.”
CERN scientists weren't being exactly reassuring on this matter either.40 They said yes, theoretically a
mini black hole could swallow up the Earth, but the probability of one of them going rogue and really
doing this is very, very low. Now when doing risk assessments, risk is defined as the probability, p, of
an event occurring multiplied by its negative consequences. In this case, the negative consequences are
infinite, so risk = p × ∞ = ∞ as long as p > 0. Although most of the general public have a very weak
handle on probabilities,41 they can still intuitively grasp the concept of risk. They have a strong gut
feeling that even if the probability of a black hole destroying the world is very, very low, it would still
be a very bad idea to roll the dice and take the chance that it might happen.
CERN scientists were much more sanguine about creating mini black holes, because they believed doing
so would indicate the existence of extra dimensions and parallel universes. That's right, the headline of an article
on PhysDotOrg on March 18, 2015 announced: “Detection of mini black holes at the LHC could
indicate parallel universes in extra dimensions.” That bold prediction is still reverberating around
scientific news outlets as yet another example of black holes “indicating” (although never testing) some
new twist to string theory. To date, no black holes have been detected coming out of the LHC.
I'm not worried too much about an LHC black hole destroying the world. First, true black holes – mini
or maxi – aren't real. Second, even if a true mini black hole did pop up, it would radiate away
harmlessly in a matter of seconds. Third, the minimum mass a black hole can have is the Planck mass,
which has an energy equivalent of about 1.2 × 10¹⁶ TeV, or almost 10¹⁵ times the energy that the LHC
can deliver per collision. And if you're wondering why I'm so obsessed about the unreality of black
holes, it's because engineering is applied physics, and physicists produce the technical tools engineers
need to do their jobs. As a retired engineer, I want physics to be about things that are real.
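That Planck-mass figure is easy to check with a few lines of Python; the rounded CODATA constants below are my own inputs, not from the essay:

```python
# Compare the rest energy of the Planck mass with the LHC's design collision energy.
PLANCK_MASS_KG = 2.176e-8      # CODATA value, rounded
C_M_PER_S = 2.998e8            # speed of light
JOULES_PER_EV = 1.602e-19

planck_energy_tev = PLANCK_MASS_KG * C_M_PER_S**2 / JOULES_PER_EV / 1e12
lhc_energy_tev = 14.0          # design energy per proton-proton collision

print(f"Planck energy: {planck_energy_tev:.2e} TeV")               # ~1.2e16 TeV
print(f"ratio to LHC:  {planck_energy_tev / lhc_energy_tev:.1e}")  # ~9e14, almost 1e15
```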

39 Actually, it's always safe to bet against the world coming to an end, because nobody could come around to collect their
money from you if you lose. I'm surprised that none of the brilliant scientists who bet the world would end on 07/16/45
stopped to think about how they would collect their winnings.
40 In fact, CERN scientists were secretly hoping they would create mini black holes, world destroyers or not.
41 This is demonstrated by the $billions the general public hands over to gambling casino operators every year.

Appendix H – DANGER: Square Roots Ahead!
When I was a practicing electrical engineer, I always got a little nervous when a problem had a solution
involving square roots. That means the solution has two answers, and if they are real numbers,
generally one of them is wrong, so you have to figure out which one and throw it away. Even
worse, sometimes both answers end up being complex numbers, so you have to accept both of them
(or throw both of them away). Fortunately, electrical engineers are quite comfortable with complex
numbers, and in fact we couldn't do our jobs without them.42 On the other hand, mechanical,
aeronautical and civil engineers generally try to stick to real numbers. Those other engineering types
regarded us “electricals” as kind of weird because we work with imaginary voltages and currents as if
they are real, whereas non-electricals think of them as being … well … not real.43
I was pondering black holes the other day and trying to come up with a simple analogy based on
geometry. It occurred to me that cramming more and more matter inside a fixed-volume sphere until it
turns into a black hole seems like cramming more and more volume into an ellipsoid with a fixed
surface area. Unfortunately, while it's easy to calculate the volume of an ellipsoid (it's V = 1/6 π a2 b,
where “a” is the short dimension and “b” is the long dimension), it's very hard to calculate its surface
area. So I came up with a much simpler analogy; i.e., cramming more and more area into a rectangle
with a variable length and a fixed perimeter size.
Consider a rectangle having a fixed perimeter 2a + 2b = 4, so that a + b = 2 and the area is A = a∙b.
Substituting b = 2 − a gives the quadratic a² − 2a + A = 0, so one of the sides, a, expressed as a
function of the area A, is a = 1 ± √(1 − A). Uh oh, there's that square root sign again. Does it mean
“a” actually has two different values? Well yes, sort of, but it turns out that a = 1 − √(1 − A) is the
width of the rectangle, whereas a = 1 + √(1 − A) actually equals its length, b, so a rectangle with a
fixed perimeter really only has one set of “a” and “b” values for a given area.
Here's what happens when varying the area inside a rectangle with a perimeter 2a + 2b = 4:

With A = 0, the rectangle is just a straight line with a length b = 2. Increasing the area to A = 1 forms a
square with all sides having a length of one. But what happens when A > 1? Well, according to the
formulas for the “a” and “b” dimensions, the four sides are complex numbers that still add up to 4, and
although the complex values for the width and length are unequal, their magnitudes are equal; e.g.,
when A = 2, | a | = | b | = √2 . I think this is like creating a black hole; i.e., trying to fit too much area
inside a 2-dimensional rectangle is like cramming too much mass inside a 3-dimensional sphere. You
might have an equation that's sensible up to a point, but it generates only non-physical results after that.
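These results are easy to reproduce with Python's complex-number support; `cmath.sqrt` returns the principal complex root, so the same two formulas work on both sides of A = 1. This is just a sketch of the rectangle analogy above:

```python
import cmath

def rectangle_sides(area):
    """Width and length of a rectangle with perimeter 2a + 2b = 4 and the given area."""
    root = cmath.sqrt(1 - area)    # becomes purely imaginary once area > 1
    return 1 - root, 1 + root      # a (width), b (length)

for A in (0.0, 1.0, 2.0):
    a, b = rectangle_sides(A)
    print(f"A = {A}: a = {a}, b = {b}, |a| = {abs(a):.4f}, |b| = {abs(b):.4f}")
# A = 0 is the degenerate straight line (a = 0, b = 2); A = 1 is the unit square;
# A = 2 gives the complex pair a = 1 - i, b = 1 + i, whose magnitudes are both sqrt(2).
```

Notice that even for the complex solutions, a∙b still equals the area and 2a + 2b still equals 4; the equations remain self-consistent while describing nothing physically real.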

42 Charles Steinmetz was a Prussian-born engineering genius who introduced electrical engineers to the world of complex
numbers through the use of “phasors.” They are not the same as the directed-energy weapons called “phasers” used in
“Star Trek.”
43 “Imaginary” is an unfortunate word choice. In fact, you will wind up really dead if you're electrocuted with imaginary
current. I'd rather use the term “orthogonal” instead of “imaginary.”

The plots below show the real and imaginary parts of the “a” and “b” dimensions of a rectangle with a
perimeter 2a + 2b = 4 as a function of the area contained inside the perimeter.

I discovered my “black hole analog” even has a singularity. The derivatives of “a” and “b” with respect
to A are calculated below, which move the square roots into the denominators.
da/dA = ½ / √(1 − A) for A < 1;  da/dA = − ½ / √(1 − A) for A > 1
db/dA = − ½ / √(1 − A) for A < 1;  db/dA = ½ / √(1 − A) for A > 1
As A → 1, the derivatives blow up into ± ∞ or ± i ∞. It's those darned square roots acting up again.
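The divergence is easy to see numerically with a centered difference on the width a(A) = 1 − √(1 − A) as A approaches 1 from below:

```python
import math

def a_of(A):
    """Width of the fixed-perimeter rectangle, a = 1 - sqrt(1 - A), valid for A < 1."""
    return 1 - math.sqrt(1 - A)

h = 1e-8
for A in (0.9, 0.99, 0.999):
    numeric = (a_of(A + h) - a_of(A - h)) / (2 * h)   # centered-difference derivative
    exact = 0.5 / math.sqrt(1 - A)                    # da/dA from the formula above
    print(f"A = {A}: da/dA = {numeric:.2f} (formula: {exact:.2f})")
```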
This exercise may be apropos of nothing, but it was kind of fun. I think it does illustrate an important
point, though. It's possible to develop sets of equations that are 100% mathematically correct but don't
represent anything physically real when certain values are plugged into them. Black holes are like that.
Einstein and Rosen created a similar thing with their “ER bridge.” In order to get rid of annoying
singularities, they used a new variable, u, which is utterly non-physical. But when people looked at
those equations, they saw physical wormholes and tried to create theories based on them.

Appendix I – The Beast That Lurks Within
Galaxies are strange objects. They come in many different shapes and sizes, but there are four main
types: disk, barred disk, elliptical, and irregular. In 1933, Fritz Zwicky, a Swiss-American astronomer,
discovered that galaxies in the Coma cluster seemed to defy the law of gravity, moving far too fast to be
held together by their visible matter, so he concluded that there had to be missing mass – a lot of it – in
order to explain this anomaly. Lately, physicists have come up with all
sorts of exotic theories to account for the missing mass, including extra dimensions posited by string
theory and a modified version of gravity at low accelerations (the MOND44 hypothesis). But the
most popular theory involves a mysterious substance called “dark matter” that has some rather unusual
properties; namely that three out of the four fundamental forces of nature (strong nuclear, weak nuclear
and electromagnetic) have no effect whatsoever on this stuff – it's affected by gravity alone.
Most astronomers believe a huge ball, or halo, of this mysterious dark matter envelopes our own Milky
Way galaxy, and that all galaxies share this feature in common.45 Unfortunately, particle physicists
aren't really on board with the idea of dark matter because they are very proud of their standard model
of particle physics, which is pretty much complete as is and has been verified by experiments at the
Large Hadron Collider and other places. It would be a major pain in the rear for them to go back to the
drawing board and invent a whole new menagerie of particles, antiparticles and force carriers that don't
fit into their current standard model.
In Appendix J of my essay Is Science Solving the Reality Riddle? I concluded that a self-gravitating
system of particles can form a stable cloud or halo only if the particles produce an internal pressure
through particle-particle collisions, as in a gas. If particles of “dark matter” only interact through
gravity, they are incapable of doing that. In Appendix S of the same essay, I calculated the
distribution of mass in a self-gravitating spherical cloud of an ideal gas using Newton's law of
gravitation and a little calculus. What I found was that the density of the cloud drops off very rapidly
from its peak value near the center, then it trails off very gradually the farther away from the center you
go. Because incremental volume increases with the square of the radius, the cumulative mass of the
thinning cloud continues to increase almost linearly over huge distances from the center, which results
in orbital velocities of stars in the disk that closely match the “anomalous” orbital velocities Vera Rubin
and others have observed in disk galaxies over the years, now attributed to “dark matter.”
I concluded that ordinary hydrogen molecules near absolute zero temperatures might be the true answer
to the mystery of missing mass.
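The profile described above is the classic self-gravitating isothermal gas sphere, and its qualitative behavior is easy to reproduce numerically. The sketch below is my own, not the calculation from Appendix S: it integrates hydrostatic equilibrium plus the ideal gas law outward, in units where G, kT/m, and the central density are all 1, and reports the circular orbital velocity √(G∙M(r)/r):

```python
import math

# Hydrostatic equilibrium of a self-gravitating isothermal ideal-gas sphere:
#   (kT/m) * drho/dr = -G * M(r) * rho / r**2,    dM/dr = 4*pi*r**2*rho
# Units scaled so G = kT/m = central density = 1 (my choice, for simplicity).

def deriv(rho, M, r):
    return -M * rho / r**2, 4.0 * math.pi * r**2 * rho

def integrate(r_max=50.0, dr=0.005):
    r, rho, M = 1e-6, 1.0, 0.0
    while r < r_max:
        d_rho1, d_M1 = deriv(rho, M, r)            # midpoint (RK2) step
        d_rho2, d_M2 = deriv(rho + 0.5 * dr * d_rho1,
                             M + 0.5 * dr * d_M1, r + 0.5 * dr)
        rho += dr * d_rho2
        M += dr * d_M2
        r += dr
    return r, rho, M, math.sqrt(M / r)             # v_circ = sqrt(G*M(r)/r)

r, rho, M, v = integrate()
print(f"r = {r:.0f}: density = {rho:.2e}, enclosed mass = {M:.1f}, v_circ = {v:.3f}")
```

Far outside the dense core, the density falls off close to 1/r², the enclosed mass grows almost linearly, and v_circ settles near √2 in these units — exactly the flat-rotation-curve behavior described above.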
The fact that 90% of mass in the Milky Way might be “missing” raises the question of how so much
hydrogen could escape detection. Well, assuming that the effective radius of a spherical halo weighing
a trillion solar masses extends about two times the radius of the visible disk, you can calculate an
average density of about 0.16 hydrogen molecule per cubic centimeter inside the halo. The density
profile reveals that the mean density is about 7% of the peak density at the center.46
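The average-density figure can be reproduced in a few lines; the 50,000-light-year visible-disk radius (hence a 100,000-light-year halo) is my assumption, since the text only says the halo extends to about twice the visible radius:

```python
import math

M_SUN_KG = 1.989e30
CM_PER_LY = 9.461e17
M_H2_KG = 2 * 1.674e-27                  # mass of one hydrogen molecule

halo_mass_kg = 1e12 * M_SUN_KG           # one trillion solar masses
halo_radius_cm = 2 * 50_000 * CM_PER_LY  # twice an assumed 50,000 ly disk radius
volume_cm3 = (4 / 3) * math.pi * halo_radius_cm ** 3

n_h2 = (halo_mass_kg / M_H2_KG) / volume_cm3
print(f"mean halo density: {n_h2:.2f} H2 molecules per cm^3")   # ~0.17
```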
According to the Internet,47 the density of hydrogen in interstellar space48 is about 1 atom per cubic
centimeter. According to the chemistry courses I took, hydrogen gas normally comes in molecules

44 MOND stands for Modified Newtonian Dynamics.

45 At least astronomers will say they believe that if they value their National Science Foundation research grants and/or
are seeking highly-prized tenured faculty positions at major universities.
46 The average density is based on the halo's total mass of about 1 trillion solar masses, divided by the volume of a sphere
extending to an effective radius where the density drops to 2% of the peak value at the center.
47 Take it from me – you can always trust everything you find on the Internet.
48 I'm assuming the region midway between the Sun and the nearest star system, Alpha Centauri, represents that region.

consisting of two hydrogen atoms, and I see no reason why that wouldn't be true in interstellar space as
well. So I'm going to convert one atom per cubic centimeter into 0.5 molecule per cubic centimeter. It
needs to be stressed that there's a lot of other “stuff” in interstellar space besides hydrogen, including
many types of heavy molecules,49 dust, ice, and chunks of rock of various sizes – interstellar junk. But
for the sake of comparison, I'm only going to look at hydrogen molecules, which are compressed by
gravitational attraction toward the halo's center of mass according to the ideal gas law.
Another interesting number I peeled off the Internet is the density of hydrogen in intergalactic space.
Experts guesstimate that figure to be around one atom per cubic meter, or 0.000001 atom per cubic
centimeter. The following table summarizes the above results.

| Region | Hydrogen Density (H₂ molecules / cm³) |
| --- | --- |
| Interstellar Space (Observed) | 0.5 |
| Milky Way Halo – Near Center | 2.4 |
| Milky Way Halo – Mean Density | 0.16 |
| Milky Way Halo – Outer Edge | ~ 0.048 (2% of peak) |
| Intergalactic Space (Estimated) | < 0.000001 |

The relative density values inside the halo were computed using Newton's law of gravitation and the
ideal gas law.50 Based on an estimate of our location being 2.5 × 10¹⁷ km from the center of the halo,
the calculated interstellar density near us is 0.96 molecule/cm³, which isn't far off from the observed
value of 0.5, given the uncertainty concerning the halo's real size. You can see that the density at the
“outer edge” of the Milky Way halo has a long way to go before it drops to the value for intergalactic
space. When I calculated the density profile, I assumed the halo behaves like a perfect gas at a uniform
temperature throughout the halo, which isn't 100% true. Furthermore, I ignored the fact that the mean
free path between molecular collisions is extremely large near the edge; therefore, some molecules
simply escape from the edge instead of being scattered back toward the center. This would cause the
density to drop more rapidly than I calculated, approaching the density of intergalactic space.
If the hydrogen molecules were in thermal equilibrium with the rest of the universe, they would be a
very chilly 2.7 K (−454.8°F).51 It's not hard for me to imagine that they could escape detection by
astronomers, especially since astronomers are trying to find them by peering through the relatively dense, hot
region of interstellar junk all around us. In my opinion, “dark matter” in the Milky Way is nothing
more than a huge, very rarefied and extremely frigid cloud of molecular hydrogen surrounding the
visible disk of stars, planets and interstellar junk where we live.
Okay that's very nice, but what does it have to do with black holes? Well, at the center of the Milky
Way galaxy lurks a monster weighing in at roughly four million solar masses: The Beast That Lurks Within.
Most astronomers think the Beast is a black hole, and the reason they know something really big lurks
49 Some of these are large organic molecules, including amino acids that are precursors to proteins in living things.
50 Refer to the charts in Appendix S of my essay Is Science Solving the Reality Riddle?
51 Some astronomers claim that the hydrogen atoms surrounding our galaxy are excited to energies corresponding to
temperatures of millions of degrees. I find that hard to believe. What's the source of energy for that? I think some of
the atoms (the ones astronomers can actually detect) may be excited to very high energies from collisions with cosmic
rays, gamma ray photons, and other kinds of ultra-high-energy projectiles, but I strongly believe average temperatures
are very low, rendering most intergalactic hydrogen utterly invisible.

in there is because stars near the center of the galaxy are moving very fast, meaning they're clearly
orbiting around some large, massive, unseen object. Conventional wisdom says that an unseen object
weighing millions of solar masses can't be anything besides a black hole, and other galaxies are believed
to contain similar massive objects at their centers.
A few million solar masses seems like a lot of mass, but I need to point out that this is a tiny fraction of
the total mass of all the stars orbiting around the center of the Milky Way. It would be impossible for
billions of individual stars to maintain stable orbits if the only thing holding them in their orbits were
the gravitational attractions from this one object and each other. Stars are clumpy objects that gravitate
toward other masses, but they do not produce internal pressure like a gas, so stars alone can't form
stable orbiting structures. That is why it is absolutely imperative that the Milky Way have an enormous
halo of gaseous material to anchor and stabilize its huge array of orbiting stars.
So how did this giant “black hole” get there? If you look way out at the early universe when galaxies
first formed, you'll see a lot of extremely bright objects known as quasi-stellar objects or quasars for
short. Quasars were as luminous as entire galaxies are today, but they were relatively small, on the
order of 1.5 light years in diameter.52 So what were those things? I believe a quasar was the central
core of a newly-formed galactic halo, where the gas density reached a tipping point that led to a rapid
collapse into an ECO. But a quasar was no run-of-the-mill ECO; it was The Mother of All ECOs.
Since an ECO doesn't (and never will) have an event horizon, the collapse is violent and visible for
everyone to see. A collapse of millions of solar masses would be especially violent, a roiling mass of who knows
what – maybe a quark-gluon plasma – which can easily outshine anything else Nature can dream up.
But remember that an ECO will try to become a black hole, complete with an event horizon, over an
infinitely long period of time. As its forlorn journey toward that unattainable state continues, no true
event horizon forms, but an escalating gravitational red shift makes the ECO appear to a distant
observer as if it actually has achieved black hole status. In other words, an elderly ECO looks pretty
much like a black hole from a faraway distance. I believe the central “black hole” in the Milky Way –
The Beast That Lurks Within – was once a quasar that has settled down as a docile ECO. Wrapped in a
cocoon of relativistic gravitation that slows time to almost a standstill, the Beast may
look calm and peaceful, but it would be anything but calm and peaceful if you got close to it.53
In the quasar stage of a galaxy, the central region of the halo would be hollowed out from intense
radiation pressure beaming from the quasar. Because the halo is made of gas, a stable hollowed-out
structure would be possible. When the quasar reaches its “quiet phase,” gas would collapse inward again and the halo
settles into a static configuration. Other passing galaxies may occasionally disturb the shape of the
halo, but like a soap bubble, it would then settle back to that same configuration. A rapidly-spinning
halo would take on the shape of an ellipsoid instead of a sphere, but in any case, it would anchor stars
orbiting within it to form a structure held together by gravity and stabilized by internal pressure.
Stars forming in a rapidly-spinning galaxy are clumpy objects attracted to other clumpy objects, so the
stars tend to collect in the central plane of rotation, forming a disk. Over time, interstellar junk
disgorged from stars accumulates in the disk. A thin, but very massive, halo of primordial hydrogen
encloses the disk like a womb. An elliptical galaxy doesn't spin as fast as a disk galaxy, so stars
forming inside its halo will gravitate toward the center of mass and revolve in highly-elongated orbits
in random directions, making the galaxy resemble a swarm of bees. I'm not sure if elliptical
galaxies also have massive ECOs in their centers, but they probably do.
52 This is much bigger than stars, but much, much smaller than a galaxy, which is why they're called “quasi-stellar.”
53 I'm hoping Planet Earth will never get up close and personal to one of those things.

Appendix J – The Case of the Disappearing Mass

In the beginning of this essay, I briefly mentioned Hawking-Bekenstein radiation. This occurs because
particle-antiparticle pairs are ripped apart at a black hole’s event horizon, and some popular-science
writings attempt to explain why black holes lose mass by claiming that only the antiparticles fall into the
black hole and that antiparticles have negative mass. Of course this is utter nonsense for two reasons: 1) all
antiparticles have positive mass, just like regular particles, and 2) there is no reason why only the
antiparticles would fall into the black hole in the first place. Therefore, the black hole must lose mass
regardless of whether the particle or its antiparticle partner falls into it. But then why wouldn’t a
refrigerator, a Volkswagen, or an entire planet falling into it do the same thing? And if everything
falling through the event horizon decreases the black hole’s mass (thus making the event horizon
smaller), then how in the world could an event horizon form in the first place?
I decided to do my own little gedanken experiment to try and find out how much negative gravitational
energy surrounds a black hole’s event horizon and how much negative mass this would be equivalent
to. I imagined trying to enlarge a preexisting black hole of mass M by lowering a weight of mass m onto the
event horizon from infinitely far away using a rope and pulley system depicted below.

As the weight is lowered through a distance, – dr, work is done on the pulley and energy, dE, escapes
into the surrounding universe. The energy escaping from the system is equal to negative gravitational
energy that develops as the weight approaches the black hole, and the escaped energy is equivalent to a
decrease in mass. How much mass is lost if the weight starts out at infinity – where the gravitational
energy is zero – and it is lowered down to the radius of the black hole’s event horizon, rs?
The tension in the rope can be calculated from the proper acceleration of a stationary test mass derived
from the exterior Schwarzschild metric. The tension equals m times the proper acceleration:
T = m∙| d²r/dτ² | = m∙M∙G / (r² √(1 − rs/r) )
The incremental change in the energy of the (M + m) system is dE = T dr. The total change in the
system’s energy, ΔE, is found by integrating dE = T dr :
ΔE = ∫ [ m∙M∙G / (r² √(1 − rs/r) ) ] dr,  integrated from r = r1 to r = r2

When the above integral is solved over the range r1 = ∞ to r2 = rs, the result turns out to be quite simple:
ΔE = − m (2∙M∙G / rs)
The change in the black hole’s mass, ΔM, is equal to m plus ΔE / c²:
ΔM = m + ΔE / c² = m − m (2∙M∙G / (c²∙rs)) = m − m (rs / rs) = m − m = 0
In other words, when weights are “added” to a black hole via this pulley system, the black hole’s mass
doesn’t increase at all!54 Matter isn’t physically destroyed; it’s just canceled by negative energy.
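The closed-form result above can be double-checked numerically. The integrand blows up (integrably) at r = rs, so this sketch substitutes u = 1 − rs/r, which turns the integral into (G∙M∙m / rs) ∫₀¹ u^(−1/2) du with finite limits; the one-solar-mass black hole is just an arbitrary test case:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M = 1.989e30           # one solar mass, an arbitrary test case
m = 1.0                # 1 kg weight on the rope
rs = 2 * G * M / C**2  # Schwarzschild radius

# With u = 1 - rs/r, the tension integral from r = rs to infinity becomes
# (G*M*m/rs) * integral of u**-0.5 over [0, 1].  The midpoint rule never
# evaluates the integrand exactly at the u = 0 singularity.
N = 1_000_000
integral = sum(((k + 0.5) / N) ** -0.5 for k in range(N)) / N   # converges to 2
energy_released = (G * M * m / rs) * integral

print(f"energy released lowering the weight: {energy_released:.4e} J")
print(f"rest energy m*c^2 of the weight:     {m * C**2:.4e} J")
```

The two numbers agree to within a fraction of a percent, confirming ΔE = −m∙c² and hence ΔM = m + ΔE/c² = 0.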
This simple exercise reveals a fundamental fact, which is the key to understanding the fallacy of black
holes: The cancellation of an object’s effective mass due to negative gravitational energy must be
taken into account in order to determine the true size of its Schwarzschild radius. For planets and
ordinary stars, the amount of negative gravitational energy is negligible compared to the amount of
ordinary “baryonic matter” in those objects. But strictly speaking, the space surrounding a spherical
mass is not “empty” because it contains negative gravitational energy, and this is as much a part of the
object as the material inside its radius. This is exactly what Abhas Mitra’s analysis determined: The
quantity M isn’t a free parameter you can just stick into an equation. Negative gravitational energy is
equivalent to negative mass, so M is a variable M(R) that decreases as the radius R of a spherical mass
approaches rs ; the only possible value of M(rs) for a hypothetical black hole is zero!
Stephen Crothers (mentioned in Footnote 13 above) is considered to be somewhat of a loose cannon in
the scientific establishment, mainly because he denies the existence of black holes.55 You can watch a
video of one of his presentations online. The main point gleaned from the video is that an exact
solution of Einstein’s field equations must include the entire
universe, whereas calculations of the standard interior and exterior forms of the Schwarzschild metric
only include space within a finite radius. Computing the Schwarzschild radius this way for planets and
stars where R >> rs is acceptable because gravitational energy is insignificant compared to the positive
mass-energy in those objects. Thus, the exterior Schwarzschild metric can be successfully used to
calculate the orbits of Earth satellites and the bending of light near the Sun. However, since the
surrounding space is warped and filled with negative gravitational energy, the Schwarzschild metric
only approximates the exact solution to Einstein’s field equations by not taking equivalent negative
mass into account – both inside the object’s radius and beyond it – and it becomes completely invalid
for the extreme (and impossible) case where M is contained within the radius 2GM/c².
So what actually happens when a stellar object collapses? I think the answer is obvious: Negative
gravitational energy builds up and reduces the star’s total effective mass along with its Schwarzschild
radius. The outer radius of the star can never reach the shrinking Schwarzschild radius, so no true
event horizon can form. Nature always finds ingenious ways of preventing things from going haywire
in Her universe, and no matter how hard we try to squeeze an object inside its own Schwarzschild
radius, this attempt will utterly fail. Collapsing stars don’t form black holes because their propensity to
collapse under gravity diminishes as more and more of their effective mass is canceled by negative
gravitational energy. Space has a very powerful natural tendency to be flat, and it uses gravity itself to
iron out any “kinks” or black holes that try to form in it.

54 I couldn’t find a formula for the proper acceleration of a static test mass inside a black hole, so I can’t calculate the
tension in the rope after the weight passes through an event horizon. But I have a strong suspicion even more energy
escapes, so the effective mass of a black hole would be reduced: ΔM = – m.
55 They say a broken clock shows the right time twice a day, so even if Crothers were 99% wrong about things in
general, he could still be 100% right about black holes.
