
What is

Energy?




By Dave Broyles





To use this e-Book edition, left-click here to go to the Table of
Contents and Summary of Arguments, then left-click the section
you wish to read.


Copyright in Canada by Trafford Publications, June, 2001
(first publication, with revisions)
Copyright in the United States by David R. Broyles,
1989, 1990, 1991, 1994, 2006
First printed publication by Trafford Publications, 2001
www.trafford.com
Second edition to be published by Trafford Publications, 2006
e-Book publication by the author, April, 2006
Copyright in the United States by David R. Broyles, 2006
available from the author at
whatisenergy@yahoo.com
Author will supply the first 10,000 e-Book copies
via e-mail at no cost to recipients.
Price for later copies will not exceed $3.95.
NOTE TO READERS: This manuscript was made available via Trafford
Publications to the public over the Internet on June 28, 2001. This was prior to the
attacks of September 11, 2001, and before the more recent increases in world oil prices.
Publication of this e-Book edition should in no way be attributed to or confused with any
action upon the part of the United States government or other entity to respond to attacks
upon the United States, rising world oil prices, or any international war upon terrorism.
The contents of this book were fully accessible to the United States government,
its decisionmaking individuals and entities, and to any other potentially interested parties
prior to September 11, 2001. To whatever degree that various governments may have
been aware of the contents of this book, the contents of this book were completely
ignored. This is not to say that there may not have been conscientious individuals,
intelligence analysts in particular, who were well aware. This merely states that they,
like me, were ignored.
This being the case, I have chosen to leave the June, 2001, Trafford edition in
print, without substitution or revision. The Trafford edition definitely needs significant
revision. I find continued publication of this edition to be personally embarrassing.
However, there are limits to the degrees to which history should be revised. In the
interest of historical record, I have chosen to leave an obviously flawed and substandard
edition in print. Please do not purchase a copy, unless you are an historian.
Do not believe that intelligence analysts were aware of anything, unless and
until some former or retired intelligence analysts choose to step forward and tell their
stories. I cannot speak for them. We must await the time, if it comes, when they will
choose to speak for themselves. Bear in mind the fact that many nations have
intelligence analysts. The analysts who choose to step forward may, or may not, be
American. Let fate take its course. As if it wouldn't anyway.
Foreword to April, 2006, e-Book Edition
I recently began rereading my book for the first time since self-publication in June, 2001.
For years, I have been content to let the book languish in my files. I have never had a copy of the
published book, only the diskettes from which the book was published. As much grief as this
project had caused me over the years, I didn't want a copy of the book. I still don't.
Two characteristics of the 2001 foreword now stand out in my mind. First, the 2001
foreword better reflected where my project started rather than where it ended. In its final form, my
book was much less about language than it was about theoretical assumptions. What emerged as
most important was the failure of Western (European-American) culture to carefully codify the
assumptions that underlie physical theory in general, and energy theory in particular. Careful
codification ultimately involves subjecting these assumptions to rigorous examination. Is what we
are saying really necessary? Codifying that which we assume invites us to search for alternative
assumptions, and to give careful thought to these alternatives.
At the core of our theory is an assumption regarding the relationship between the
structure of our mathematics and the structure of the universe. We project the structure of our
mathematics out onto the universe. We assume the existence of nearly absolute correspondence,
which means that we need not justify the ways in which we apply particular forms of mathematics.
Geometry has given us the three dimensional coordinate system. We have therefore assumed that
fundamental principles of structure must be three dimensional, rather than single dimensional but
functioning in a three dimensional context. This set the stage for later formulation of theory, in
particular field theory and Einsteinian relativity.
Field theory assumes that field assigns qualities to otherwise empty points in space, just
as the three dimensional coordinate system assigns coordinates to these points. How do we know
this? How can we test the qualities of empty space?
Einstein then asked the fundamental questions underlying relativity, from within a
context that assumed that field must inevitably assign qualities to empty points in space. To retain
this assumption and then answer his questions, Einstein was forced to distort the values of the
units of time and distance. Is this really necessary? I argue that it is not.
However, the real issues at hand are not theoretical. They are very practical. Is it
possible that a particularly wrong assumption, or a particularly wrong answer to a very valid
question, has led us in a fruitless direction? We face a long-term energy shortage. Is it possible
that this shortage is rooted in our way of thinking, and not in the world in which we live? Will we
find better fruit if we pursue a somewhat different pattern of thought?
Second, my 2001 foreword was excessively optimistic. That foreword was actually written
during the early 1990s. At the time, I was much more optimistic than I am now. I felt that it
might be possible to force physical theorists to reconsider the assumptions that underlie
contemporary theory. I now know that I was very naïve. In the United States, roughly 90% of all
research money for physics comes from the U.S. government.
Unless the governmental officials who hand out this money sanction certain new
questions, researchers will not ask them. I see virtually no possibility that this sanction will be
forthcoming. Ultimately, the questions that researchers are funded to pursue have little to do with
science, and everything to do with the nature of political leadership. The lifeblood of scientific
research is money, not ideas. The same can be said of politics.
Sanctioning new questions means sanctioning threats to established economic interests.
Such threats will not receive federal sanction in our present political environment. Federal
sanction will require much stronger political leadership than we now have. I do not believe that
strong leadership is likely to emerge from either major political party, Republican or Democratic.
I am no supporter of either party. I consider any such support to be a meaningless waste of time
and effort.
Years ago, I was optimistic that such leadership might have emerged in the aftermath of
the 1990-1991 Iraq war. Perhaps we would then have understood that we could not depend upon
limitless availability of cheap imported oil. Perhaps we would have rethought our technological
dependence upon oil. Quite obviously, I was wrong. I now know that every oil company
executive and every political hack knows far more about physics than any Nobel laureate. They
even know more about the future of international economics.
With regard to political and cultural leadership, the same applies to Western Europe.
Their leadership is as weak as ours.
I still believe that the questions will be seriously pursued, but probably not in the West.
We Westerners are too deeply in love with the precepts of our culture, and too confident of our
cultures future in our now-small, very international world. Excessive confidence leads to cultural
arrogance, which in turn commonly leads to error and decline. Yes, we are culturally arrogant.
Our cultural arrogance may well provide our competitors with their greatest opportunity to rival -
and surpass - our past successes. We think that we know. They know that we do not know.
This is characteristic of all cultures. Each culture contains within it the seeds of its own
predestination. The same seeds that lead to rising wealth and influence in earlier centuries can
inevitably predestine a culture to declining wealth and influence in later centuries or in later
decades, or in mere years. In today's highly competitive international environment, these
processes of rise and decline can happen much faster than in the past. Small world equals very
short time horizon. Our rise may have taken centuries, but that was when the world was much
larger. Decline can take place much more rapidly in our very small world.
I seriously doubt that the crucial cultural and theoretical questions will initially be
pursued in either North America or Europe. In the United States, our political leadership is too
deeply embroiled in the Culture Wars to recognize and acknowledge the most important questions.
Most important are those questions that will become most crucial to the international status of the
United States, its industry, and its economy, regardless of their impacts upon established
economic interests. Those impacts are inevitable, regardless of who pursues the questions.
Issues raised by the Culture Wars may help fill church collection plates today, and
political war chests. From a political perspective, the beauty of the Culture Wars is that they do
not threaten any established economic interest. Within political strategy, cultural warfare is a
wonderfully effective diversionary tactic, perhaps the most effective ever devised. The Culture
Wars generally assume that most everything that needs to be known is already known. We can
therefore be comfortable in our cultural arrogance. We need not ask of ourselves those questions
that other cultures may ask of ours, those questions that other cultures may hope to use to obtain
relative advantage over us. Internal cultural warfare is the final refuge for weak national
leadership, leadership that looks inward rather than outward, backward rather than forward.
What happens if the Culture Wars divert us from those questions that will prove most
essential to our nations future in our small-world, international environment? Future collection
plates may become meager at best. Such are the rewards of pride and hubris. Political war chests?
Let fate take its course!
Instead, I believe that the most crucial questions will be pursued by Asian cultures that
have much looser ties to the cultural precepts of the West. These are cultures that are willing to
perceive the West as quirky and peculiar, and the precepts of Western culture as anything but
preordained and inevitable. They are not involved in our Culture Wars, nor do they care about
them.
Those cultures that best understand us, but are most willing to doubt us, are those that
will be most likely to challenge us. These cultures are interested in pursuing their own internal
wealth, and they don't particularly care who holds wealth and power in the United States. They
only care about what (if anything) our wealth and power mean to them. Successfully challenging
our culture will reduce both their interest and their concerns.
I anticipate that the pursuit of economic development in an increasingly resource-short
world will lead to the willingness to culturally and conceptually challenge the West. This will be
a far greater, far more pervasive challenge than the West has experienced to date, and far greater
than the challenge the West is now experiencing from within the Islamic world. We of the
Christian West should bear in mind that there is much that we owe to medieval Islam, and there is
still much we have in common with Islam. It appears to me that the challenge from within the
Islamic world is looking too much toward the past, and too little toward the future. We are
fighting two cultural wars, one internal, one external.
Challenges that look toward the future rather than the past will have a far more pervasive
impact. The nations from within which the challenges are most likely to emerge are Korea, China,
Japan, and possibly India. These are nations that will deal comfortably with oil wealth for as long
as they believe it to be advantageous to them. They have no love for oil wealth, Christianity
(excepting Korea), Judaism, or Islam.
If, years from now, Chinese leadership perceives Christianity and its cultural warfare as
having weakened the United States relative to China, then don't expect China's leadership to
welcome Christianity. What society wants to open its doors to those institutions it perceives to
have weakened its most powerful rival? If my dire expectations come true, how will ordinary
Americans react? If they share the perceptions of the Chinese leadership, then will they continue
to welcome their own American Christian churches? Will declining cultural self-confidence
among our society's well-educated necessarily weaken our own Christian churches?
Watch Korea. Korea has mixed the traditional Confucian values of East Asia with the
perspectives of the Christian West more successfully than has any other nation. No nation is
better positioned to challenge the dominance of European-American civilization. Do not be
surprised if the challenge comes from Korea. Korea, small though it is, may well become the
Britain of the future. Britain gave its language to the world. Korea may give something else, but
something equally global, something that will give Korea disproportionately large global
influence relative to its much larger Chinese neighbor.
Obviously, my expectations may be wrong. If I am wrong, then I insist that this preface
be retained so as to properly document my error. If I am proven very seriously wrong, then I insist
that this later preface be printed twice at the front of each printed copy of the book. This second
printing will afford me ample opportunity to eat my words.
The irony of this book is that it may prove to be a guidebook for those Asians who wish
to challenge the international stature of our nation and culture. I have told them what to look for,
and where to look.
Dave Broyles
Ninole, Hawaii
Foreword to June, 2001, Printed Edition
Language is culture. Language is far more than a mere means by which we communicate
culture. One cannot use the language of any culture without accepting that culture to a substantial
degree.
Within any culture, conflict over ideas is really a conflict over the direction that its
language should take. Should the existing language and culture be retained with only minimal
change? Or is more radical change required, which will necessitate significant changes in the
language itself?
Cultural change and linguistic change go hand in hand. It is impossible to have one
without simultaneously having the other. Almost inevitably, those who advocate substantial
cultural change will simultaneously advocate specific linguistic changes.
Witness the linguistic changes that the women's movement has brought to English, the
pervasive shift toward sexually inclusive language. No longer is the sexually nonexplicit use of
male pronouns accepted. Where no specific gender is intended, then female pronouns should be
included as well: "he or she".
For purpose of the arguments that I make in this book, I have reverted to the older,
nonspecific use of male pronouns without attaching female pronouns. This is intentional. I have
employed a linguistic strategy of my own.
I am arguing that the concepts I am challenging should have been challenged and
overturned long ago, well before the advent of the contemporary women's movement. I am
attacking archaic but accepted cultural concepts using intentionally archaic linguistic forms, so as
to emphasize the archaic nature of the concepts that I am attacking.
As a matter of strategy, I argue that the alternative energy movement has made the
critical (and most likely fatal) mistake of accepting and employing the language of their
economically and technologically established opponents. It would not be overstating the case to
say that corporate interests own language just as much as they own and control patents, although
their ownership of the language is somewhat more tenuous than their ownership of patents.
Ultimately, those who control the language of any debate will determine the outcome of the
debate. Education consists of convincing others to adopt one's chosen language. In this case,
the language is that of accepted scientific doctrine.
Using accepted concepts of physics, the established technology and energy interests have
forced their opponents to fight on turf that supports the established interests. As long as these
interests can retain control of the linguistic turf, then they are assured of victory. Their opponents
will be little more than petty irritants.
There is only one way that the established interests can be defeated: Challenge the
linguistic base that supports their position. In other words, attack the scientific language of energy
and thermodynamics. Challenge this language on the basis of scientific philosophy, pointing out
all unverified assumptions and internal inconsistencies. Argue that the existing theoretical
structure fails to provide a sound foundation for its self-assumed tasks.
Attack the structure of physics itself. Point out that particular concepts emerged from
particular cultural and technological situations, and therefore can hardly be considered to be
universal and immutable. Point out that particular theories answer particular questions, but leave
us no clues as to how we might answer other, equally important questions. Point out the many
situations in which known facts and experiences contradict accepted theories.
Science really isn't a structure of sound knowledge. To an even greater degree, it is a
linguistic construct that structures our thoughts. If we wish our thoughts to take new directions,
then we must consciously seek to redirect the structure of our language. What words and concepts
can we discard? What new words and concepts can we develop as new tools with which to pursue
new possibilities?
The walls of the fortresses of existing economic and technological interests will not crack
until the linguistic turf on which they rest, and from which they draw their sustenance, begins to
give way. At the point when this happens, no power of argument, money, or brute political force
will suffice to defend them. Like the walls of Jericho, the walls of the great fortresses will
collapse in response to the pitter-patter of ordinary feet.
But feet alone are not enough. Persons seeking economic and technological alternatives
must alter their own language. They must seek to develop an alternative linguistic (and scientific)
turf from which to fight. As this alternative turf becomes more solidly established, then the turf on
which their opponents reside will weaken.
Those who have prepared for some great battle with the existing interests will be
surprised by the actual outcome: Victory having already been assured, there will be no battle with
the existing interests.
One more surprise: The road to victory will not be paved with the properly cut stones of
rectitude, but rather with the rubble of error. Those who win will not be those who were correct
most often, but rather those who made the greatest number of ill-fated efforts. From among the
great number of ill-conceived errors, a few will contain the seeds of the future.
The longer term future will not germinate in the fertile soil of the past, but rather from
those seeds that were scattered upon rocky, infertile soil. It will be that which grew where nobody
expected it to grow that will define the future for all of our descendants. Do not condemn the
infertile rocks!
The writings that follow grew from ideas that germinated among the rocks. There is
nothing in them that warrants near term acceptance. Most of the ideas are probably mere weeds,
not fertile stalks of grain. They were written by one who found his place among the rocks, and
who chose to grow as best he could among the rocks that became his home.
The sins of error are obviously mine, but any hope for a better future belongs to all
humankind.
Dave Broyles
Hakalau, Hawaii
April, 2001
TABLE OF CONTENTS
and
SUMMARY OF ARGUMENTS
CHAPTER ONE: WHAT IS ENERGY?
Energy isn't something we think about. It is instead a pattern of
thought that determines how we think. As such, it is intuitively
understood but undefined.
A. DEFINING BY QUANTIFICATION
Energy quantifies inputs and outputs but does not actually explain why
anything works.
B. DEFINING BY INTUITION
We understand energy because we see examples of it around us. However,
intuitive understandings do not provide us with a sound basis for critical
thought.
C. ORIGIN OF THE TERM
Modern usage of the term "energy" originated in the early 1800s in
England during the Industrial Revolution. The term referred to work done
by mechanical power rather than human muscle.
D. THE FIRST ENERGY CRISIS
After having experienced rapid growth during the 12th and 13th
centuries, Western Europe's economic growth reached resource-imposed
limits in wood supply and water power.
E. THE STEAM ENGINE
The steam engine developed as a technology to pump water from mines.
The period of steam engine improvement corresponded with the development
of industrial infrastructure.
F. DEVELOPMENT OF THERMODYNAMICS
Thermodynamics began as a discipline for engineers concerned
with steam engine efficiency. The theory of the steam engine was patterned
after man's understanding of the water wheel.
G. THERMODYNAMICS RETREATS INTO MERE ACCOUNTING
The realization that the steam engine wasn't just another water wheel
should have led to a search for the principle by which radiant heat is
transformed to mechanical power, but did not.
H. ENTROPY
Theory took a grim turn as entropy predicted that man's exploitation of
energy could only contribute to the degradation of a universe that was
already degrading.
The inability of thermodynamics to explain why events happen
suggests that we may experience a major paradigm shift. Later in this
book, chapters three through five propose such a shift.
I. THE CONTEMPORARY PROBLEM
We have a surplus of quantification and a shortage of explanation. This
shortage of explanation is rooted in the inadequacies of our existing
patterns of thought.
CHAPTER TWO: CREATION MYTH IN PHYSICAL THEORY
Understanding the underlying creation mythology is vital to
understanding the epistemology of the physical sciences. We can (and
do) profess belief in Darwinian evolution, while functioning more in
accordance with scriptural creation myth. Biblical creation myth
emphasizes the importance of divine law, whereas evolution
emphasizes adaptation.
A. THE NATURE OF CREATION MYTH
Creation myth, Darwinian included, tells man who he is and how he can
hope to learn. The shift from biblical to Darwinian creation myth should
have radically redefined the nature of science, but did not.
B. CREATION MYTH AND SCIENTIFIC EPISTEMOLOGY
Creationism survives as an underlying element in scientific
epistemology, leading to false optimism regarding the truth of theories,
and to conceptual inflexibility.
C. CREATIONIST ARTICLES OF FAITH
God rules the universe by law rather than by caprice. Created in the
image of God, both man and language retain divine potentials, despite
their fall and fragmentation.
D. CREATIONISM AND NATURAL LAW
Patterned after scriptural law governing man and society, natural law
limited man's adaptability because it was predictive rather than
prescriptive, leaving no freedom of choice.
E. MAN CREATED IN THE DIVINE IMAGE
Scheme of creation and the fall of man suggested that man had a great
unrealized potential. In science, cultural history is therefore
unimportant because the goal of science is to transcend culture.
F. TRANSCENDING CULTURE
The scientific textbook is a pre-Darwinian literary form that gives little
attention to culture. This inattention stems from the belief that
discovery can be independent of culture.
G. THE NATURE AND IMPORTANCE OF LANGUAGE
The myth of the creation of language implicit in biblical creation myth,
followed by the subsequent fragmentation of language at the Tower of
Babel, supported both extremes of optimism and pessimism about the
potentials of man's language.
H. ENERGY AS LINGUISTIC STRATEGY
Reuse of the ancient term "energy" escaped vernacular connotations,
suggesting transcendence. Because the word suggests something that
transcends culture, it is necessary to continually redefine energy rather
than discarding the term.
I. THE IMPACT OF DARWIN
By challenging the epistemological faith of the physical sciences,
evolution inevitably raised largely ignored questions about the
authority of the physical sciences.
J. EVOLUTION AND CREATIVITY
Mental basis of paradigm creation is primitive, giving paradigms great
power over our minds. Goals of paradigm creation are preservation
and continuity, not real change.
K. OF AGRICULTURE AND THE GODS
Both agriculture and the early gods emerged as products of play, not as
responses to necessity. Early art led to the creation of man's second
world, the world of symbolism.
L. HISTORIOGRAPHY OF THE SCIENTIFIC REVOLUTION
There exists an eternal tension between the usual pursuits of
normative science, which works within accepted paradigms, and
creative challenge to the authority of accepted paradigms.
M. THE FUNCTION OF HISTORIOGRAPHY
Historiographies, that of science included, are developed to support
institutional claims. Biblical creationism and Darwinian evolution
suggest different historiographic patterns.
N. THE CHALLENGE
The most important question concerns the transformations back and
forth between photons and mechanical power. Chapter three proposes
an answer to this question, an answer that necessarily involves
reinterpreting Einsteinian relativity. The answer that I propose left me
with no other alternative than to challenge Einsteinian relativity.
CHAPTER THREE: DOES FIELD ASSIGN QUALITIES TO
OTHERWISE EMPTY POINTS IN SPACE?
Contemporary physical theory makes one key assumption that is not
necessary and that, by its own definition, cannot be verified: We
assume that field, gravitational and electromagnetic, attractive or
repulsive, functions by assigning qualities to otherwise empty points in
space. Bodies experience field because they experience the qualities of
the points they occupy. Because we attribute the qualities of field to
empty space, and because there is no absolute perspective to which one
can orient a three dimensional coordinate system, we cannot possibly
prove this assumption.
This assumption then leads to another statement, the Lorentz
transformation that is the basis of Einstein's special relativity. If one
discards our present assumption regarding the nature of field, then one
can reinterpret the meaning of relative inertial motion, which is the
basis of Einstein's special relativity.
The result is a new theory in which space plays no role at all.
All fundamental relationships are defined in terms of a single
dimension, and within which transit from one inertial perspective to
another inertial perspective produces transformation. The fundamental
equations for both matter and energy can be derived from this
relationship, which means that matter and energy are merely two
manifestations of the same relationship.
This principle of transformation explains transformations back
and forth between photon energy and mechanical power. The principle
also suggests a new theory of light to replace the particle/wave theory.
A. NEED TO CONCEPTUALLY UNIFY PHYSICAL THEORY
The inverse squared law of field strength applies equally to both
gravitational and electromagnetic fields. We have never been able to
derive both inverse squared laws from the same statement.
B. HISTORICAL BACKGROUND
Despite Einstein's rejection of both Newton's absolute space and the
aethereal hypothesis, he retained Maxwell's equations, which were
based upon aether.
C. TESTING THE FIELD QUALITIES OF POINTS IN SPACE
It is impossible to test for qualities of points in space. The very nature
of contemporary field theory renders itself impossible to prove.
D. WHAT CAN BE MATHEMATICALLY DEFINED
Fundamental, two particle relationships can be defined in terms of
attractive and/or repulsive information flowing back and forth between
two particles. Although this hypothesis is no more readily subject to
proof than existing field theory, it does facilitate development of a
simpler and more inclusive physical theory.
E. THE MEANING OF THE BIG BANG
If information flows between particles, and if this flow is the basis of
all experience, then the origin of this relationship was the Big Bang.
We can only experience that which originated at the Big Bang with us.
F. THERMODYNAMIC TRANSFORMATIONS
Any unit of information that transits from a particle in one inertial
perspective to a particle in another inertial perspective must alter the
mutual velocity of the sending and receiving particles. This explains
the inverse squared law of gravitational and electromagnetic field
strength and the transformation of radiant heat to mechanical power in
the steam engine.
G. THE IMPLICATIONS OF COMPOUNDING
Because of their enormous rate of compounding, individual
transformational events that appear too small to have any practical
significance are, in fact, significant.
H. DERIVING THE FORMULA FOR KINETIC ENERGY
Classical formula for kinetic energy is derivable from the same
principle of transformation as is the inverse squared law of field
strength, meaning that the formulas for matter and energy are
manifestations of a unifying principle.
I. HISTORY OF THE CONCEPT OF FIELD
Assigning qualities to points, either occupied or empty, stated that the
essence of field was independent of our experiencing of it, a statement
that the inverse squared law as derived from the principle of qualitative
transformation denies.
J. DERIVING THE INVERSE SQUARED LAW
Inverse squared law as derived from the principle of qualitative
transformation claims that it is the relative movement itself that
produces the change in experienced field strength.
K. PREDICTING THE PHOTON
The photon is merely our way of experiencing transformational
changes in field intensity, meaning that field and radiation as concepts
are inseparable from each other.
L. PRINCIPLE OF PHOTON EMISSION
Photons are experienced because the conservation of momentum
applies to gravitational transformations, but not to electromagnetic
transformations.
M. PHOTON FREQUENCY
Experienced photon frequencies are a function of the electron's rate of
orbital rotation around its nucleus at the time when the electron jumps
from one shell to the next. This should explain spectra and the
photoelectric effect.
N. THE PATH A PHOTON TRAVELS
The photon is a transformation experienced as a wavelike change in the
intensity of the information that travels along single dimensional lines
from particle to particle.
O. THE POSITION OF LIGHT SOURCES
We always experience light as coming from the present relative
position of the emitter. This is because we can never say that the
position of any receiving particle has shifted.
P. INVERSE SQUARED LAW OF THE INTENSITY OF RECEIVED LIGHT
This is merely another manifestation of the inverse squared law of field
strength, meaning that changes in distance from the emitter alter the
intensity of individual photons, not the quantity of photons.
Q. OPTICS AT EXTREMELY LOW LIGHT INTENSITIES
The probability waves of quantum theory are unnecessary because the strength
of a single photon can be subdivided without limit.
R. THE CONSTANT VELOCITY OF LIGHT EN VACUO
Light assumes its velocity relative to the last body or medium with
which it interacts, meaning that we cannot interact with light without
re-establishing its velocity as relative to our own perspective.
S. THE THEORY OF REFRACTION
The theory of refraction must recognize that light travels at the same
velocity from particle to particle within solids and gases as it does in
empty space because all particle-to-particle distances are empty.
T. DERIVING ELECTRICITY
Electricity must exist for the same reason that light must exist.
Therefore, the theories of optics and electricity should ultimately
merge.
U. PHYSICS AND ENGINEERING
The concerns of physics are primarily epistemological, whereas those
of engineering are more practical. We need to codify and acknowledge
all assumptions that we make in structuring our theory.
CHAPTER FOUR: VIOLATING ENTROPY
We owe it to ourselves to rebel against entropy, and to systematically
violate entropy by all means that we can. This effort must involve a
carefully disciplined effort to better understand ourselves as humans,
for in this quest it is we rather than the universe who are the
adversaries. Pursuing this quest, we must catalogue and attempt to
develop all means by which it might be possible to violate entropy,
however unlikely or impractical these means might initially seem.
A. STRATEGIES TO OVERCOME ENTROPY
All efforts to overcome entropy must seek to harness the absolute,
undifferentiated temperature of the environment. Strategies can
involve optically concentrating ambient radiation, converting it
into electricity, or transforming it into mechanical power.
B. OPTICAL DIFFERENTIATION
Passive optical differentiation seeks to create temperature
differences from undifferentiated ambient temperatures by
optically biasing the flow of ambient radiation, which naturally
flows in all directions, not just from hot to cold.
C. PHOTOVOLTAIC CELLS
Any silicon diode is also a photovoltaic cell. Theory explains
why diodes favor current flow in one direction only, but does not
explain how this transforms electromagnetic radiation into direct
electrical current.
D. HOW MECHANICAL REFRIGERATION VIOLATES ENTROPY
Refrigeration violates entropy because its real principle involves
dual, opposing transformations rather than pumping that would be
analogous to purely mechanical processes.
E. THEORY OF THE FAN
Any fan that accelerates the movement of air molecules relative
to its environment increases linear distances between air molecules,
thereby transforming ambient heat to mechanical power.
F. THEORY OF RESILIENT SOLIDS
Imperfect resilience results from the imbalance between two
opposing transformations that take place simultaneously within
the deformed solid. Imperfection includes the transformation of
ambient heat to mechanical power as well as the transformation
of mechanical power to heat.
G. AUTOMOTIVE TRACTION
There are two kinds of traction, involving imperfect and
superperfect resilience respectively. Superperfection merely
reverses the sequence of events involved in imperfection.
Although we presently attempt to use only imperfect traction,
it should be possible to develop superperfect traction as a
feasible technology. If this happens, then tires will become a
source of mechanical power.
H. THE IMMOVEABLE EARTH
The presently accepted hypothesis that the vectoral forces our
machines apply to the mass of the earth cancel, thus averting
energy waste, cannot be correct. This is because we cannot in
any way alter the earth's rotation. Therefore, the earth cannot be
an equal but opposite body to which the conservation of
momentum applies.
I. ORBITAL MECHANICS AND TIDES
The tides and their accompanying distortion of the earth's shape
may help to explain many geophysical phenomena, and cast doubt
upon the absolute conservation of energy.
J. ORBITAL MECHANICS WITHIN THE SOLAR SYSTEM
Similar phenomena within our solar system can be explained using
the same hypothesis as the tides, and further cast doubt upon the
absolute conservation of energy.
K. THE MEANING OF MANAGEMENT
The function of theory is to enable engineers to perceive
possibilities that would otherwise escape their attention, oversights
that doom us to continue indefinitely in the same rut.
CHAPTER FIVE: RESONANT TRANSFORMATIONS
Transformational relationships involving a rotating body such as a
nucleus can resonate much as a guitar string resonates at its resonant
frequency. Rotation of the body creates a set of equally spaced
distances at which the intensity of the body's relationship with
electrons or other nuclei will exceed those predicted by the inverse
squared law. This increase results in a set of interrelated phenomena,
including increased nuclear mass and charge, radioactive half-lives,
electron shells, and the spacing of nuclei in molecules and crystals.
A. RESONANCE ALTERS INVERSE SQUARED LAW
Resonance along a single dimensional line of interaction increases
experienced field strength at particular distances to levels above
that predicted by the inverse squared law.
B. THE PRIMACY OF ANGULAR RELATIONSHIPS
For our universe to be other than purely explosive or implosive,
attraction must exceed repulsion. Angular relationships prevent
implosion, enabling structure.
C. RETAINING ELEMENTS OF CLASSICAL PHYSICS
It will probably be necessary to retain inertial mass as a quality of
particles, and probably to acknowledge the existence of some locally
absolute perspective that establishes the degree of angular momentum.
D. DEFINING THE ANGULAR PERSPECTIVE
All angular perspectives are defined with reference to that matter
within reasonable proximity, whether it be other atoms or other stars.
It is unlikely that we will be able to attribute angular perspectives
entirely to informational interactions.
E. RESONANT TRANSFORMATIONS
Internal orbit of the nucleus will cause resonant increases in
experienced mass and charge at certain distances from the nucleus.
F. ATOMIC MASS
Because of the existence of resonance, nuclei will be experienced
as having greater mass and charge than the sum of their constituent
particles.
G. ATOMIC HALF-LIVES
There is a correlation between the probabilistic nature of the
resonance-caused increase in atomic mass/charge and the
probabilistic nature of radioactive half-lives, suggesting a
common cause.
H. THE SIMPLE HYDROGEN ATOM
The single proton that constitutes the nucleus creates electron
rings, suggesting that it is the individual protons rather than the
nucleus as a whole that create electron shells. This will prove
particularly useful in explaining molecular and crystalline structure.
I. ELECTRON SHELLS
Because certain distances from the nucleus are resonant and
other distances are not, resonance will produce a set of concentric
shells that respectively allow and deny occupancy.
J. STANDING WAVES
The emissive nature of field creates standing waves that are
structural in nature. These include the gravitational waves that
Einstein predicted, although these waves manifest themselves as
structure rather than emission.
K. THE CLOUDLIKE BEHAVIOR OF ELECTRONS
Interactions between orbiting electrons and particles within the
nucleus will cause electron orbits to precess. However, there may
be a better explanation for the retention of distances among nuclei
than such cloudlike behavior.
L. MOLECULAR AND CRYSTALLINE STRUCTURE
The same resonance that binds electrons to particular atoms by
confining them to electron shells will also repel nuclei of other
atoms that get too close. Therefore, it is internuclear repulsion
rather than electron clouds that keep otherwise attractive nuclei
apart. Internuclear repulsion is far more powerful than the mutual
repulsion of electron clouds.
M. THE SIGNIFICANCE OF THE ATOM
The atom is a manifestation of certain interactive principles that
structure interparticulate interactions. Because these interactions
are so stable and predictable, we experience them as structure. The
future will emphasize the principles that give rise to this stability
and predictability, rather than emphasizing structure itself.

CHAPTER ONE:
WHAT IS ENERGY?

In August, 1990, Iraq invaded Kuwait. Iraq had long claimed that Kuwait was actually a
province of Iraq, even though the Iraqi government had never controlled it. Iraq had a strong
economic interest in attempting to gain control of Kuwait. Kuwait then did, and still does, have
some of the most lucrative oil fields in the world, making Kuwait a small but very wealthy
country.
At the time, several commentators referred to this crisis as being an "oil war". To most
of the rest of the world, this grab for oil resources threatened both regional stability and longer
term global economic stability. Iraq claimed nationalistic motives, but the potential for economic
gain was undeniable. Who would control the world's largest reserves of petroleum?
Mideast oil is particularly valuable for two reasons. First, oil is our most versatile source
of energy. Oil can be transported and burned easily. At room temperatures, oil and alcohol are
the only practical liquid fuels that we can use to power automobiles and airplanes. Oil also
produces a high heat output per kilogram of fuel weight.
Second, Mideast oil can be produced very cheaply. The actual cost of producing Mideast
oil is far lower than for any other source, any other liquid fuel. This is why the control of
Mideastern oil is such a valuable asset. Most of the market price of this oil is pure profit.
All of us know what oil is, and can appreciate what it would mean not to have oil.
Without oil, the America of today would be much as America was in 1910. We would be highly
dependent upon railroads and steamships for transportation and shipping. We would not need a
highway system.
Communities would be tightly clustered around railroad stations and navigable rivers.
Larger communities would have electric trolleys or streetcar systems powered from overhead
lines. Most people would use public transportation. More affluent citizens would have electric
cars for use around town, but would use the train to travel. Today's suburbia would not exist.
Farmers would have steam tractors for use in the fields, but would probably use horse
drawn wagons for most hauling. Most of our products would be made of natural materials. There
would be few plastics or synthetic fibers.
Alcohol-powered aviation would be in its infancy. There would be no long-range
bombers. The tanks of Hitler's blitzkrieg would never have been developed. Horses would still
be moving cannon around on the battlefield.
All of us can imagine what life would be like without energy from oil. But can any of us
define exactly what energy is?
Energy is, well...er...um.
Exactly! Actually, "well...er...um" is an excellent definition. Science's definition of
energy is like U.S. Supreme Court Justice Potter Stewart's definition of pornography: "I know it
when I see it."
This is not to make light of science at all. As Richard Feynman defined energy:

It is important to realize that in physics today, we have no knowledge of what energy is. We
do not have a picture that energy comes in little blobs of a definite amount. It is not that way.
However, there are formulas for calculating some numerical quantity, and when we add it all
together it gives "28", always the same number. It is an abstract thing in that it does not
tell us the mechanism or the reasons for the various formulas.[1]


A. DEFINING BY QUANTIFICATION

Accountants count widgets and determine their profitabilities. Physicists measure
energies and determine their availabilities. Both are engaged in acts of accounting.
The great strength of accounting is that it does quantify. This is also its greatest
weakness. Accounting quantifies without explaining the underlying reasons for the phenomena
that it quantifies.
In business, accountings quantification functions as a management tool. Accounting
data can provide useful information regarding the business environment, the costs of production,
profitability, and the market itself. However, the data itself does not explain why these factors are
as they are.
Ultimately, these underlying factors must be explained in other than quantitative terms.
One cannot ask "Why?" and then supply a thorough answer in wholly quantitative terms. If this
were possible, then there would be no distinction between the act of accounting and that of
management. Accounting itself would be management, rather than functioning as a mere tool of
management.
The same applies to the physical sciences. Energy theory can be used to quantify
physical phenomena. However, the mere act of quantification does not answer the question
"Why?" Quantification points us in the direction of the underlying principles, but does not
explain them. Having quantified, we must then make an intuitive leap toward some understanding
of underlying principle. This is a leap toward understanding that energy theory does not attempt
to make.
As a tool of discovery, quantification is absolutely essential. Like accounting in business,
there can be no substitute for it. However, one cannot reasonably manage a society and its
technological endeavors without answering the question of why something works the way it
does. Our explanation will inevitably involve a mathematical formula that leads to quantitative
results. However, the essence of the formula itself is not quantitative. The formula is instead a
statement of our understanding of a relationship. The formula has value for understanding only if
we understand what the various terms in the formula actually mean, and how they relate to each
other.
Energy itself is something that we quantify. In and of itself, the concept of energy
explains nothing. Energy and its principle of conservation are to the physical sciences what
accounting and its principles are to business.
In the case of oil, the really important question isn't "how much and at what cost". The
really important question concerns the physical principles that our oil-based technologies employ.
Do these principles require us to use oil or some combustible substitute, or will they point us
toward new and previously unanticipated technologies? Is oil "black gold", or is it merely a gooey
black liquid?


B. DEFINING BY INTUITION

Our environment contains vast energies, few of which we can practically tap. By
historically-accepted analogy, energies can be likened to the waters of the earth. There are vast
oceans that contain incredible quantities of water. By contrast with these vast quantities, there are
little streams of water that flow from the land into the oceans. Even the mightiest river is but a
little stream when compared with the vastness of the oceans!
We can erect water wheels to tap the energy of the flowing streams. This energy is
accessible to us. Gravity is drawing this water downward toward the ocean. Gravity causes the
water to flow, allowing us to tap the power of its fall.
With the exception of a few situations in which we can tap the power of the tides, we
cannot tap waters of the oceans to power a water wheel. The waters of the oceans have reached
their lowest elevation. Gravity will not cause the waters to fall any farther.
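The intuition can be made quantitative with the standard hydropower relation (a textbook
formula, not given in the original):

P = \rho g Q h ,

where \rho is the density of water (about 1,000 kg/m³), g is gravitational acceleration (about
9.8 m/s²), Q is the flow in cubic meters per second, and h is the height of the fall in meters.
A modest stream delivering 2 m³/s over a 5 m fall yields roughly 1000 × 9.8 × 2 × 5 ≈ 98,000
watts. The oceans hold incomparably more water, but with h = 0 the formula yields nothing, no
matter how large the other terms become.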
This is something that we intuitively understand. We recognize these facts just as we
intuitively understand what energy is. We intuitively understand what energy is because we
understand such models as the water wheel. When someone asks us to define energy, we use such
models as bases for our explanations. Our explanations are really analogies, not definitions. We
can illustrate, but we cannot define.
There is a definite problem with intuitive understandings: They do not lend themselves
well to critical thought. How can we criticize a concept that we cannot really define, and that we
can explain only by analogy? Criticizing the concept of energy is like shooting at a moving duck
(or quail) that one cannot see.
The concept of energy has become like one of the mind-dominating archetypes of
Jungian psychology. It is not really something about which we think. Rather, it has become a
pattern that determines how we think.
In business, our pattern of thought and the products about which we think can be
separated from each other. This is because we have concrete concepts as to what our products are.
We have no such concrete concept of energy. Consequently, we are unable to separate the object,
energy, from the pattern we use to think about it. Management in the physical sciences is mired in
a problem that would be generally foreign to managers in business.
In business, we can distinguish between a dollar and a widget. We can separate our
thought about profit from our thought about the product we sell to make a profit. In physics, we
have been unable to make any such clear distinction. Energy is both the accounting procedure
AND the item about which we think. The accounting system has come to wholly dominate our
consciousness.
In our understanding of energy, our pattern of thought and our technological culture have
blended together into a single entity. We intuitively grasp what energy is because we see
examples of energy among the technologies of our culture. Once we have intuitively grasped what
energy is, then this intuitive understanding governs our perceptions of future possibilities.
Our pattern of thought and our technological culture have joined hands with each other,
each referring us to the other. As soon as we ask a question of one, it hands us off to the other.
We ask our technological culture what energy is. Our technological culture refers us to science.
We then ask science. Science refers us back to our technological culture. Round and round we
go, again and again, tracking our way around an endless circle.

We have asked science what energy is. Science cannot answer this question. We should
look instead at the history of our technological culture. Why has our culture given birth to this
concept? What were the technological questions that led to this ill-defined abstraction? Is it even
reasonable to call energy a concept?

C. ORIGIN OF THE TERM ENERGY

The modern spelling of "energy" was first used in 1599. The term referred to qualities of
person, language, and expression, not to matters relevant to physical theory. Aristotle had used
the term energia to denote a species of metaphor that calls up a mental picture of something acting
or moving.
Partially following Aristotle's lead, later Latin writers used the term energia in a looser
sense to refer to the force or vigor of verbal expression. In the eighteenth century, an energumen
was not a scientist, but rather an individual possessed by a devil or a fanatical enthusiasm,
one who might be burned at the stake or hanged.
The term "energy" was adapted into physical theory in 1807 by English physicist Thomas
Young. Borrowing from ancient Greek, "energy" combined en, meaning "in", with ergon, meaning
"work". Translated literally from ancient Greek, "energy" means "in work".[2]

The simplest nineteenth century definition held that energy is "anything that can
do work". Work is defined as involving the movement of any body possessing mass. This is
hardly a profound definition. It is merely a literal restatement of the term's earliest etymology.
This early definition does, however, tell us something about the culture in which the
concept originated. The English culture of 1807 was in the throes of an Industrial Revolution.
Characteristic of this revolution was the use of repetitive, rotary power in place of human muscle
power.
Machines could work just as human or animal power could work. It was possible to use
the inanimate forces present in nature to work just as man himself had worked before. Rotary
machines had replaced the slaves of the ancient Mediterranean and the serfs of medieval Europe.
This rapidly expanding revolution recognized that work (ergon) was present in nature itself.
Two eighteenth century events led to the Industrial Revolution. One was the invention of
the steam engine. The second was the use of coal in place of wood as a fuel to produce iron. Both
developments took place in England.
Neither, however, involved a truly radical departure from Englands previous
technological culture. The antecedents to both dated back several centuries.
When one examines these antecedents, one comes to an obvious realization: There was
good reason why the Industrial Revolution took place in Northern Europe rather than around the
Mediterranean or in the Middle East.
The medieval industry that had preceded the Industrial Revolution required two things:
forests and streams. By medieval times, the Mediterranean region had been largely denuded of
forests. The region had never had large quantities of freely flowing water.
When reading the Bible, one reads of Jacob's well, not of Jacob's water wheel. The
reason is obvious: Israel needed water for its people and cattle, but had little flowing water to
power water wheels. What the Mediterranean region needed was wells and aqueducts to solve
water shortages, not water wheels to harness water surpluses. Jacob's well would never have been
important had water been available in surplus quantities.

By contrast, the northern lands inhabited by the Germanic tribes had plenty of flowing
water and were mostly forested. Forests were necessary to produce iron. Until the eighteenth
century, all iron was smelted using charcoal. Charcoal was made by slowly burning piled wood,
using the heat of the fire to convert wood into charcoal.
One did not make iron unless one had large quantities of wood at one's disposal. The
vast forests of the north supplied the wood used to smelt nearly all of Europes iron. As vast as
these forests were, however, they could only supply so much charcoal. Consequently, only so
much iron could be produced.
A similar situation existed with regard to water power. There was only so much water
flowing down the various streams, and only so far that this water could fall. Consequently, there
were natural limits imposed upon the availability of water power.
Water power was an intuitively obvious possibility. It took no special genius or act of
abstraction to invent the water wheel. Consequently, water power had developed early in the
medieval history of Northern Europe.
As early as 1086, William the Conqueror's survey of taxable assets in southern England
recorded the locations of 5,624 water mills, one for every 50 families. This survey became known
as the Doomsday Book because those who owned the listed assets were allegedly doomed to pay
taxes on them forever.[3] By the standards of later centuries, the England of 1086 was relatively
primitive and undeveloped.
Water mills were particularly attractive assets for taxation. Once built, they produced
predictable annual incomes with very little upkeep. Per dollar invested, the return was high. This
being the case, water mills became the first industry to issue shares of stock.
As the iron industry developed, water mills were used to power the bellows that pumped
air into the blast furnaces. The industries of ironmaking and water power had met. As other
industries developed, more and more uses were found for the rotary power supplied by water
mills.
In terms of their structural engineering, the great cathedrals of Northern Europe were
hardly primitive. They were the first buildings to use iron for structural reinforcement. As such,
they were the precursors to today's skyscrapers, just as the medieval industries were precursors to
our modern industries and to our concept (?) of energy.[4]


D. THE FIRST ENERGY CRISIS

In Northern Europe, the twelfth and thirteenth centuries were a period of relatively rapid
population growth and economic development. By the fourteenth century, however, this growth
was stagnating. Economic growth was reaching a resource-imposed limit. Only so much water
power could be tapped. The formerly great forests could produce only so much wood to make
iron.
Europe was now experiencing its first energy crisis. Both wood and flowing water were
renewable but finite resources. With finite resources having been pushed to their limits, one finds
litigation over the control of flowing water.
Each litigant had a vested interest in some particular distance that his water would be
allowed to fall. If his neighbor downstream heightened his dam, then this reduced the power the
litigant could obtain from his water wheel upstream.
The twelfth and thirteenth centuries were the first Industrial Revolution. By the
fourteenth century, industry was well established in Northern Europe.[5]
Italy, lacking the forests
and streams, gave us much of the literature and art of the Renaissance. Italy, however, never
developed an industrial base comparable to that of the North.
It was this already established industrial base that made Northern Europe receptive to the
industrial innovations of the eighteenth and nineteenth centuries. The most important innovations
were the two that solved the previously existing resource limits.
These two innovations related to ironmaking and rotary power. Coal could be used in
place of wood to make iron. Although the iron was initially inferior to that made with charcoal, it
could be made in vastly larger quantities. Iron could be made in unlimited quantities as long as
ore and coal were available.
Similarly, rotary power could be supplied by the steam engine. The quantity of power
produced would be limited only by the availability of fossil fuels. Industry need no longer depend
upon forests and streams. Fossil fuels would allow unlimited industrial production and economic
growth.
Initially, the problems that fossil fuels overcame were economic, not technological. Fossil fuels were used to perform functions that were already being performed by other means. There were definite, and in fact preferred, alternatives to fossil fuels.
Coal had previously been mined in limited quantities to heat buildings and fire
cookstoves. However, wood was considered to be a more desirable fuel. Because coal commonly
contained sulfur, it burned with a foul odor. Wood was the fuel of the wealthy. Coal was mined
for the poor.[6]


E. THE STEAM ENGINE

The steam engine developed primarily as a coal mining technology. Mine owners needed
to pump water from their mines. Rarely was it possible to use water power to do this. One could
either use animal power or employ a new technology.
In the early decades of the steam engine, water power remained unquestionably cheaper
than steam power. Steam engines were practical only where water power was unavailable.
Ideally, a large supply of cheap fuel should be readily available. Coal was available at the coal
mines. There was no need to transport the fuel even a short distance.
The earliest steam engines were unwieldy contraptions. A five-horsepower engine would
require at least 100 square meters of land area. Its appearance resembled a modern but oversized
oil well pump. As engines were improved in the early 1800s, their sizes shrank drastically and
their power output increased substantially.
The first steam engines were known as atmospheric engines. They relied upon
atmospheric pressure rather than steam pressure for their power. Steam entered the cylinder at
atmospheric pressure, and was then condensed to liquid within the cylinder. This condensation
produced a drop in pressure. Atmospheric pressure then caused the piston to move, reducing the
volume remaining within the cylinder.
Atmospheric pressure could only produce a limited amount of power per unit of cylinder
size. It required a huge cylinder to produce five horsepower. After 1800, steam engines went from sub-atmospheric (negative) to positive steam pressures. Power increased while engine sizes shrank
dramatically. It was now feasible to use steam engines to power steamships and iron wagons, the
railroads.
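A rough pressure calculation shows why the atmospheric engine needed so much cylinder to deliver so little power. The sketch below is illustrative only: the cylinder bore, vacuum quality, and piston speed are assumed values, not historical measurements.

```python
import math

# Illustrative atmospheric-engine arithmetic; all figures are assumptions.
P_ATM = 101_325.0   # standard atmospheric pressure, Pa
HP = 745.7          # watts per horsepower

def piston_force(bore_m: float, vacuum_fraction: float = 0.8) -> float:
    """Net force on the piston when condensing the steam lowers the
    cylinder pressure to (1 - vacuum_fraction) of atmospheric."""
    area = math.pi * (bore_m / 2.0) ** 2
    return P_ATM * vacuum_fraction * area

# Assume a 0.5 m bore and an average working-stroke speed of 0.25 m/s.
force = piston_force(0.5)      # roughly 15,900 N
power = force * 0.25           # roughly 4,000 W

print(f"piston force: {force / 1000:.1f} kN")
print(f"power output: {power / HP:.1f} horsepower")
```

Atmospheric pressure caps the force at roughly ten newtons per square centimeter of piston, so the only route to more power at this pressure was an ever larger cylinder. Positive steam pressure removed that cap.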

Prior to 1800, no nation possessed the basic infrastructure needed to support a modern
industrial economy. Inland transportation was slow and primitive. Only small loads could be
transported.
The early nineteenth century was the period of infrastructure development. This early
infrastructure consisted primarily of transportation capabilities. Railroads were built, combining
the steam engine with rails made of coal-fired iron. This was to become the age of iron horses and
iron men.
Canals were built, allowing draft animals to pull far larger loads than they could pull over
rough roads and over hills. Steamboats could travel under their own power upriver as well as
downriver.
Industrial development had been freed from the medieval requirements for forests and
streams. Coal (and later oil) could be substituted for the forest. Steam power could be substituted
for water power. Transportation freed industry to locate considerable distances from its sources
of raw materials. It was now possible to build industrial cities.

F. DEVELOPMENT OF THERMODYNAMICS

It was this culture of iron horses and iron men that gave birth to energy theory. When we
talk about energy theory in an economic sense, we are talking about the discipline of
thermodynamics.
In essence, thermodynamics is an accounting system. How much mechanical power can
be obtained by burning a lump of coal? Accounting asks a similar question about widgets: How
much profit can be earned by manufacturing and selling a widget?
Steam engines were being made ever more efficient and ever more powerful. The
question that gave birth to thermodynamics was that of limits: Engineers wanted to know if there
was some ultimate limit to the efficiency that the steam engine could achieve. If so, what was it
that set this limit?
There had never been any reason to ask this question about the water wheel. It had long
been known that the power output of the water wheel was limited by both the quantity of water
that passed through it, and by the distance that the water was allowed to fall. As previously noted,
these facts were accepted by courts of law.
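In modern notation (not, of course, the notation available to the medieval courts), these two limits combine into a single upper bound on a wheel's power:

\[ P_{\max} = \rho \, g \, Q \, h \]

where \(\rho\) is the density of water, \(g\) the gravitational acceleration, \(Q\) the volume of water passing through the wheel per unit time, and \(h\) the height through which that water falls. No refinement of the wheel itself can lift its output past this bound.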
With regard to the development of thermodynamics, one can note a divergence between
the concerns of academic science and those of more immediate concern to engineers. Engineers
focus more narrowly on the matter of efficiency, a concern of accounting. When all one needs to
do is justify one's accounting system, a vague or analogous definition of principle may suffice.
Thermodynamics began as a discipline for engineers, not one for philosophers or for
scientists seeking to understand the unifying principles of the universe. Engineers were concerned
with the practice of industry, whereas academic theorists were not. Being associated with the
financially more conservative segment of English society, physicists drew their income from land
rents rather than industrial profits. Having been named to academic chairs, these physicists were
entitled to tenure. Tenure was the income from land rents. The land had been set aside, usually by
endowment, to support the academic position. Until the nineteenth century, the English economy
was still more agricultural than industrial.
Neither academic physicists nor the persons who set up their tenures were inclined to
speculate financially in mines or mining technology. Persons who invested in mines were known as adventurers. Physicists were not adventurous. As part of the more polite segment of society,
academic physicists rarely if ever had reason to enter mines.
Throughout the eighteenth century, physicists remained preoccupied with the traditional
concerns of physical theory, concerns that had been handed to them by their academic
predecessors. These concerns extended little further into industry than the mechanics of falling
water. Physicists scarcely noted the existence of the steam engine.
The earliest theory of the steam engine borrowed from man's earlier experiences with the
water wheel. This theory assumed that heat was a subtle fluid, called caloric, that naturally flowed
from warm bodies to cold bodies, just as water flowed from higher to lower elevations. In
essence, the steam engine was merely a new form of water wheel. The steam engine did not
transform the heat that passed through it. Rather, it merely drew power from the pressure of the
flow of heat.
French engineer Sadi Carnot, the earliest of the noted thermodynamicists, employed this
theory. In his own words:

The production of motive power is then due in steam-engines not to an actual
consumption of caloric, but to its transportation from a warm body to a cold body, that is,
to its reestablishment of equilibrium.[7]


He then drew a direct correlation between the steam engine and the water wheel:

we can compare with sufficient accuracy the motive power of heat to that of a
waterfall. Each has a maximum that we cannot exceed. The motive power of a
waterfall depends upon its height and the quantity of the liquid; the motive power of heat
depends also on the quantity of the caloric used, and on what may be termed, on what in
fact we will call, the height of its fall, that is to say, the difference of temperature of the
bodies between which the exchange of caloric is made.

This gave us the second law of thermodynamics: Heat flows from warmer bodies to
cooler bodies. This means that there exists a definite sense of direction toward which events will
tend. Energies tend toward equalization, just as water tends to flow ultimately toward the sea.
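The modern formalization of Carnot's maximum, written in notation he never used and with temperatures measured on an absolute scale, makes the waterfall analogy quantitative:

\[ \eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}} \]

The greater the "fall" in temperature between the hot and cold bodies, the greater the fraction of the heat flow that can be converted into motive power; with no fall, no power at all.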
In 1843, English physicist James P. Joule experimentally established that heat was, in fact, being transformed into mechanical power. Caloric theory had been wrong. A certain quantity of heat was transformed into a certain quantity of mechanical power, and vice versa. Because energy was conserved in any such transformation, a certain quantity of heat must equal a certain quantity of mechanical power. Joule had discovered the first of physics' transformational equivalences.[8]
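Joule's equivalence can be stated as a single number; the figure below is the modern value, slightly more precise than Joule's own measurements:

\[ 1 \ \text{calorie} \approx 4.186 \ \text{joules} \]

One calorie of heat and about 4.186 newton-meters of mechanical work are the same quantity in different dress. This is the number that lets the accountant place an equals sign between the two.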


G. THERMODYNAMICS RETREATS INTO MERE ACCOUNTING

Within a couple of generations of discovery, the existence of actual transformation
should have fundamentally changed our patterns of thought. Having discovered that
transformation was taking place, we should have sought a clear and simple conceptual
understanding of the principle of this transformation. We should then have exploited this understanding to structure our pattern of thought. We could then have emphasized
actual management rather than mere accounting.
This never happened. What happened instead was that thermodynamics avoided, and
later de-emphasized, the pursuit of principle in favor of acts of mere accounting. Thermodynamics
became preoccupied with the question of efficiency rather than that of principle. Efficiency, the question that had given birth to thermodynamics and had later led it to the discovery of actual transformation, ultimately crowded out efforts to understand the principle of transformation.
Physics and engineering became closely intertwined, even though their purposes are quite
different. The function of engineering is to design and build better widgets. The purpose of
physics is to attempt understanding of the principles that underlie all physical phenomena. One
can build widgets quite successfully without understanding the universe. Efficiency is a key
concern of engineering. Unless and until there is some possibility of radical technological change,
the principles that underlie the universe are of little relevance or interest to engineering.
When physics incorporated thermodynamics into its body of thought, it also incorporated
the primary concern of engineering: efficiency. Mere accounting became a science unto itself,
wholly self-sufficient within itself. Thermodynamics developed into a philosophy that mandated
the inevitability of particular outcomes to physical events, but then never explained the principles
that produced the outcomes. The drafting board of engineering became the essence of science.
Joule's discovery of actual transformation forced theorists to discard the fluid (caloric)
theory of heat. The new theory that emerged, and that still prevails today, is the kinetic theory of
heat: Heat is the random movement of individual molecules. This random movement contrasts
with the more organized and orderly movement of pistons, rods, turbines, and wheels.
The new kinetic theory recognized that heat and mechanical power were distinguishable
from each other in practical terms. In theory, however, the kinetic theory denied the existence of
any genuinely qualitative distinction. Our earliest response to the discovery of transformation was
to deny that transformation was genuinely qualitative. Earlier, this was how thermodynamics
itself had originated. Thermodynamics had denied that there was any genuine difference between
the water wheel and the steam engine.
In lieu of any genuine qualitative difference, both heat and mechanical power were
segments of the same continuum of mechanical motion. Heat exists at the lower end of this
continuum, at the level of particulate motion. The practical motions of mans machines exist at a
much higher level, at the level of the motions of corporate bodies such as steel pistons and shafts.
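The modern kinetic theory gives the lower end of this continuum a precise measure; the expression below is the standard textbook statement for an ideal gas, supplied here for illustration:

\[ \langle E \rangle = \tfrac{3}{2} \, k_B T \]

where \(k_B\) is Boltzmann's constant and \(T\) the absolute temperature. The average kinetic energy of a molecule is simply proportional to temperature: heat, on this view, is ordinary mechanical motion that happens to point in random directions.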
The earlier two laws of thermodynamics, based upon the literality of the analogy with
water power, were nonetheless retained. Rather than stating literal truth, however, water power
became the archetypal analogy used to explain the steam engine. This analogy justified the use of
thermodynamics as an energy accounting system, without really explaining what was happening in
the steam engine.
We could understand energy intuitively, recognizing it when we saw it. We could
explain the principles of energy by analogy, without ever achieving an understanding of the
underlying principle of transformation by which the steam engine functioned. When one can
explain without actually understanding, then one's explanation masks the true depths of one's
ignorance. We deceive ourselves into believing that we know, when in fact we are totally
ignorant.
Such were the intellectual methods of primitive religion. When one heard thunder, one
knew that it was Thor's hammer. A very large percentage of the ancient documents that we discover today were actually accounting sheets. Bookkeepers counted cattle and measures of
grain. Priests counted the numbers of animals that had been sacrificed to placate potentially angry
deities. One need not really understand if one could sacrifice to placate the deities. One already
knew all that one really needed to know.
The development of thermodynamics was characterized by a tension between the
requirements of accounting and those of genuine understanding. Ultimately, it was accounting
that came to dominate the discipline. Thermodynamics gave up any pretension of explaining
principle, of answering "Why?".

H. ENTROPY

In 1864, Rudolf Clausius first stated the concept of entropy. Entropy was the natural
progression from order to chaos, a twofold degradation of order. First, entropy was the shift from
orderly movement of corporate bodies such as pistons and shafts to the disorderly, random
movements of individual molecules.
Second, entropy involved temperature averaging. This meant that the degrees of relative
motion among the molecules would be averaged, resulting in some common degree of motion.
Gaseous temperatures and pressures should naturally equalize.
Entropy is the natural progression of energy from a useful to a useless state. Although
energy can be neither created nor destroyed, potentially useful energy inevitably degrades from
useful to useless. Entropy is the measure of the universe's growing uselessness.[9]
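Clausius gave this measure a precise form. The notation below is the later textbook convention rather than a quotation from Clausius:

\[ dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta S_{\mathrm{universe}} \geq 0 \]

Entropy S rises whenever heat flows at absolute temperature T, and the total for the universe as a whole never decreases. The second expression is the accounting rule that underlies everything in this section.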

Clausius' major contribution to theory was his contention that man could not even
momentarily reverse the natural tendency toward entropy without simultaneously producing even
more entropy. The entropy that man must produce will equal or exceed that which he
momentarily reverses.
The steam engine momentarily reverses the normal tendency toward entropy. To do this,
the engine must produce more heat than it can transform to mechanical power. It will expel waste
heat. Every operation and every movement within the machine will then expend some portion of
this mechanical power. Ultimately, all of the power the engine produces contributes to the entropy
of the universe.
The old technology of water power had relied upon the clouds and the rainfall to
perpetually replenish its streams. There was a natural cycle that was perennially self-replenishing.
The new doctrine of entropy offered no such hope. All of man's activities would add to the
increasing uselessness of the universe.
The universes degradation of useful energy to useless energy would take place whether
man participated or not. Man was doomed to fight his environment so that he could productively
participate in this degradation. In doing so, he would accelerate the processes of degradation.
Even at the heights of its most pessimistic otherworldliness, the Middle Ages had never
postulated such a bleak future. Under the medieval world view, history would culminate in divine
justice. There would be a final accounting, an event at which all old accounts would be settled and
the books closed for all eternity.
Under entropy, the world would end in a nullity, a useless state in which the events of the
past would have no meaning. The past would have vanished into the vast ocean of
undifferentiated sameness. There would be no perpetually replenished streams flowing into this
ocean.
The primeval chaos of precreation Genesis would be restored. There would be neither light nor darkness, neither land nor water. There would be no fish in the sea, nor birds in the air.
There would be no final accounting. There would be no reason for such an accounting.
The acts and the events for which we had accounted would have ended. They would have brought
us to a state where further accounting would be useless. There would be no useful number that could be placed on a balance sheet or cash flow statement. Units of useful energy had been thermodynamics' medium of exchange. There would be no remaining medium of exchange.
This was nihilism's finest hour. Never had nihilism known a more articulate advocate. Man's rendezvous with his own destiny was hopelessly pessimistic, his future a conflict between
the potentials of his intellect and the limitations of his universe.
The small, voracious mechanical lizards of the early days of the second Industrial
Revolution would become the environment-devouring giants of the twentieth century. As our
environment became depleted, the dinosaurs of today would become the useless skeletons of
tomorrow.

I. THE CONTEMPORARY PROBLEM

The real problem we face with energy today is more intellectual than physical: We have
a surplus of quantification and a shortage of genuine explanation. Thermodynamics has given up
even attempting explanation.
Thermodynamics claims to account for results. It does not attempt to explain the
principles that produce these results. It applies formulas to explain what happens when certain
events take place. The essential assumption underlying all of these formulas is that energy is
conserved. This assumption allows us to apply the equals sign with neither doubt nor explanation.
This lack of explanation has characterized physics in general for the past 75 years. The
body of collected data has increased vastly. Our understanding of it has not. We can account, but
we cannot manage.
This state of affairs suggests that we may be on the threshold of a major paradigm shift.
Somewhere, somehow, when we least expect it, some previously unknown person may emerge
with a fundamentally new pattern of thought. A few of the puzzle pieces that we have previously
considered factual may go into the trash as needless or erroneous. A new piece or two will be
added. Other pieces will be rearranged into some new pattern.
We need to examine existing patterns of thought and note their inadequacies. As
advanced and sophisticated as we like to think contemporary theories are, there are far too many
basic questions for which we have no simple answers. Any new paradigm that provides simple,
interrelated answers for a set of these questions could have revolutionary impact.
If such a revolution takes place, then we may find that the concept of energy as we
intuitively understand it will lose all relevance. Rather than finding a definition for energy, we
may find that we have no reason to seek a definition.
Today, we know that there are two kinds of heat, not just one. There is the heat of
molecular motion. There is also the heat of electromagnetic radiation, of photons. The
thermodynamic theories of the nineteenth century were seemingly unaware of this fact.
To describe the situation in the context of a piston steam engine, let us look into the hot
steam within the cylinder. True, molecules are moving about randomly at relatively high
velocities. These molecules will lose much of this velocity as pressure on the piston enlarges the
volume within the cylinder. At the same time, however, electromagnetic radiation at a frequency
somewhat lower than visible light is also bouncing around within the cylinder. Somehow, both the motion of the steam molecules and the electromagnetic radiation will be transformed into practical mechanical power.
Examining the function of the internal combustion engine raises the same question, but
with even greater emphasis. The immediate product of chemical combustion is photon energy, not
the random movement of molecules. Fuel will be burned, producing radiant heat in the form of
photons of electromagnetic radiation. These photons will cause all gaseous molecules within the
cylinder to move randomly with far greater velocity than before. How is photon energy
transformed into mechanical energy?
How are the energies of molecular motion and electromagnetic radiation transformed
back and forth, one into the other? We say that photons "excite" atoms, causing them to move.
This is not a satisfactory explanation. We are merely using language to mask our ignorance.
The relevant question isn't "What is energy?". Instead, the relevant question is "How do
we explain the principle of transformation?". What we really need to know is this: How are light
and mechanical power transformed back and forth, one into the other? In the case of the steam
engine, how does this machine transform the photons released by combustion into mechanical
power that we can use?
Whatever the principle of transformation is, we know that it can proceed in either
direction. It is reversible. How do "excited" atoms emit photons? Once we learn what the
principle of emission actually is, then we will know what light is. We will be better able to
explain how light moves. We may no longer need to combine the contradictory concepts of
particle and wave. As applied to light, both particle and wave are actually analogies, not
definitions. We say light is like something else because we don't really know what light is.
This currently accepted paradox, combining particle and wave, may merely be masking another
point of our ignorance.
We may even discover that the most fundamental formulas for matter (the inverse squared law of field strength) and energy (E = ½MV²) derive from a common principle. If this happens, then both matter and energy will go into the ashbin of history, both replaced by a single principle from which basic formulas are derived.
Our most fundamental question must relate to the emission of photons from atoms. Why
does a change in the motion of an electron orbiting around the nucleus of an atom, a shift from one
orbit to another by the electron, emit or absorb a photon? What is the exact principle by which
this takes place? We have no answer.
Physics has blessed us with many formulas that work as best we can tell. We don't know
why certain formulas work, or where we may encounter limitations inherent within them. There
may be loopholes or paths by which we can circumvent them. We need to understand principles
so that we will be able to predict the loopholes.
For example, the principle of entropy may be true when discussing the steam engine.
However, most Americans possess a technology that violates entropy: mechanical refrigeration
and air conditioning. The efficiency of a heat pump, which is merely an ordinary air conditioner
used for the production of heat rather than cooling, is measured according to its coefficient of
performance (COP). This is a measurement of the degree to which a heat pump exceeds 100
percent efficiency, thereby violating entropy. If the heat pump did not violate entropy, it would be
far simpler and at least as efficient to employ electrical resistance.

The coefficient of performance is measured by comparison with the direct transformation
of electricity to heat by electrical resistance. Electrical resistance is a 100 percent efficient process
that produces only heat as its product. If electrical resistance wasn't 100 percent efficient, it would
violate the conservation of energy. How could we explain the loss? Where did the lost energy
go?
Direct transformation by electrical resistance establishes the benchmark, a COP of one.
Most heat pumps have COPs of two to four, meaning that they are 200 to 400 percent efficient.
We are violating entropy.
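As a minimal sketch of this bookkeeping (the numbers are illustrative assumptions, not measurements of any particular machine), the coefficient of performance is simply delivered heat divided by electrical input, with resistance heating pinned at the benchmark of one:

```python
# Illustrative COP arithmetic; the input figures are assumed, not measured.

def coefficient_of_performance(heat_delivered_kwh: float,
                               electricity_used_kwh: float) -> float:
    """COP = useful heat out per unit of electrical energy in.
    Direct electrical resistance defines the benchmark, COP = 1."""
    return heat_delivered_kwh / electricity_used_kwh

resistance_cop = coefficient_of_performance(1.0, 1.0)  # 1.0, the benchmark
heat_pump_cop = coefficient_of_performance(3.0, 1.0)   # an assumed typical value

print(f"resistance heater COP: {resistance_cop:.1f}")  # 1.0
print(f"heat pump COP:         {heat_pump_cop:.1f}")   # 3.0, '300 percent efficient'
```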
Lacking explanations of principle, physics has all too often become more of an
accounting system than a source of explanation. This accounting system has become ever more
precise and ever more sophisticated. As an accounting system, thermodynamics has made the
dismal science of economics look bright by comparison.
We need more. We need explanations based upon clearly stated principles. We will not
be able to manage our energy situation satisfactorily until we find these explanations. We will be
stuck with a concept that we cannot define, and with problems that we will be unable to solve.
Worst of all, we will believe that we actually have a concept, energy. This belief will attach us with medieval iron chains to our culture's past. We will forge our own chains, then fasten them to our arms and legs and, worse, to our minds. We will become prisoners to ourselves.
Energy isn't a problem that needs to be solved from within the framework of accepted,
contemporary thought. Rather, energy itself, understood as being the contemporary framework of
thought, is itself the problem.
We mortals are of but clay. It is from clay that we came, and to clay that we must return.
We are not God, the Great Architect. We are only accountable to that Great Architect. It is the
work of the Great Architect of the Universe that we seek to understand. Our efforts to understand
will be at best feeble and flawed. But even flawed stones can be used to construct useful
structures provided that we choose those stones that fit best together despite their flaws.
We should begin our quest in a spirit of humbleness, but with the hope that our humble
endeavors will produce useful results. We should not blame our God for the circumstances in
which we find ourselves. Our circumstances are not God's doing. The Great Architect's grand plan does not include our failure.

CHAPTER TWO:
CREATION MYTH IN PHYSICAL THEORY

The first chapter discussed the technological roots of the concept of energy, how and why
our technological culture developed the concept. This chapter discusses the broader cultural roots
of the scientific movement itself, quite independently of technology.
We can hope to escape the limits inherent in our culture only by breaking through the
bars of our culture. However, to a substantial degree it is unrealistic to expect that we will be able
to do so. This is why I anticipate that the most pervasive and successful challenges to Western
culture will not come from within Western culture itself, but from East Asia instead. Those
outside our culture can adopt our culture in part, choosing to adopt only what they want. They are
not burdened with our cultural heritage. For them, our culture offers a buffet line, not a fixed
menu.
Although some may perceive our culture as a jail, even its most discontented prisoners have found comfort within its confines. Therein lies the danger that
we must confront. Our comfort within our culture is, all too often, the enemy of clear analytical
thought. Our thought becomes clouded by the presumptions that we, as members and participants
in our culture, make without acknowledging that we are doing so. It is easier for those outside our
culture to understand this than it is for us to understand this.
Quite independently of our technologies, our culture of today is itself rooted in our
cultural heritage. We bear not only the genes of our ancestors, but also most of their patterns of
thought. Our cultural roots have become parts of all of us, whether we choose to acknowledge
them or not. We cuddle up with our cultural heritage, quite comfortably, as if it were a threadbare
and disintegrating teddy bear. We notice neither age nor disintegration, nor our intellectual
dependence upon the teddy bear. We only feel warmth and comfort.
Human cultures are superbly adapted to accepting paradox and internal contradiction
without acknowledging their existence. This is one of the compromises that every culture makes
to survive amicably and prosper. This acceptance is successfully adaptive. However, such
adaptation need not compromise the potential of our human intellect.
We need not embrace all of the accepted paradoxes and internal contradictions inherent
within our culture. Obviously, it is better that we do not. The best way to neutralize them so that
they will not cloud our thought is to consciously codify them. We already have rituals, liturgies,
and laws that codify much of our culture. We live by a code, whether we wish to acknowledge
this or not. Even our calendar is coded. What are the many holidays and celebrations? We have
encoded into our calendar that which our culture most values, that which we believe is most
worthy of celebration.
Consciously and systematically codifying our culture helps us to find its weakest, least
consistent points. The points at which our culture's paradoxes and internal contradictions coalesce
are fertile points from which to begin creative endeavors. This chapter discusses one internal
contradiction that is inherent to cultures that are of European origin, one rooted in our culture's
two creation myths.
There are two kinds of creation myth. One is the creation myth that we profess to
believe. The other is the myth by which we actually function. It is possible to profess one, but to
function in accordance with the other. This is particularly true when the one by which we function was deeply rooted in the culture long before the second creation myth emerged.
When a culture professes one myth but functions according to another, there is an internal
contradiction built into that culture's world view. The twentieth century physical sciences have
been characterized by such an internal contradiction.
Surprising as it may seem at first glance, the physical sciences have yet to adopt a
single creation myth as the basis of their epistemology. Physical scientists profess to believe in
biological evolution, but often function as if they believe in biblical creation myth.
Despite a professed belief in Darwinian evolution, biblical creationism is alive and well
within the sciences themselves. This presence is very rarely acknowledged. Scientists who
employ biblical creationism in a de facto sense are rarely aware that they are doing so. One
cannot take one's own knowledge of scientific doctrine, and assume that science itself is
functioning in accordance with its implications.
Biblical creation and Darwinian evolution suggest different ultimate purposes for the
disciplines of the physical sciences. The difference between these two has enormous implications
for contemporary culture. The two suggest very different institutional arrangements within our
culture, and different futures for our culture.
Biblical creation myth suggests that it should be possible to attain some understanding of
divine law as it pertains to physical phenomena. This law will then be binding upon all future
cultures and upon all ages. Man can reasonably aspire to absolute truth.
As man discovers more about natural laws, he will learn more about the possibilities open
to him. He will also discover the natural constraints that will govern his future. Man will develop
those technological possibilities that are open to him, while respecting those limitations imposed
by particular natural laws.
If one projects far enough into man's future, then one can project some future state from
which little additional technological progress will be possible. As man moves into the future, the
remaining unexploited technological possibilities will inevitably become fewer, and the
constraints more obviously constraining. For example, we might experience an energy crisis.
Institutions that pursue this absolute truth by employing an epistemology appropriate to
the pursuit should enjoy an unchallenged, absolutist status similar to that of the medieval church.
Although the Scientific Revolution broke with the medieval church on some matters, it
nonetheless retained many of its ideals.
One should bear in mind that the Scientific Revolution was never a revolution against
authority. The Scientific Revolution never disputed the need for authority. Rather, the Scientific
Revolution was an attempt to replace an older sense of authority with a newer and more firmly
based authority, an absolute authority that would be binding upon all peoples and all ages. The
objective was greater authority, not less authority.
The medieval church had claimed broad authority based upon its role as the recipient and
designated interpreter of revealed knowledge, the scriptures of the Bible and the writings of the
church fathers. Experimental science claimed its authority based upon the ability of each
generation to repeat the same experiments, and to verify the same results. The difference is not as
great as it might initially appear.
Particularly in earlier times, scientists assumed that interpretations of results, as well as
the empirical results themselves, would remain essentially constant. Physical reality would be like
a book that would read the same for all future generations. As man read from the open book, he
would discover natural laws that would be valid for all peoples and all times. The validity of these
laws could be demonstrated by repeating the same experiments and obtaining the same results, generation after generation. It would be this ability to experimentally demonstrate its truths to generation after generation that would give the new experimental sciences great authority over the domains they claimed, domains that they insisted should be ceded to them by religious institutions.
At the root of this world view lay the concept of natural law, and a particular faith
regarding man's ability to achieve understanding of such law. Without a supporting
epistemological faith, the concept of natural law has little practical significance. Of what value
would natural laws be if man could not discover them?
The foundation for belief in natural law did not reside in the natural world, but in a kind
of faith concerning man himself. There must have been something about the nature of man that
allowed him to discover and verify natural laws. This was where biblical creation myth, taken
partially from Genesis but subjected to great elaboration during medieval times, played a vital role
in the development of the modern sciences. Prior to Darwinian evolution, and in fact up until
fairly recently, it was biblical creation myth that provided the foundation for the epistemological
faith of the physical sciences.
In contrast to biblical creationism, Darwinian evolution suggests that we should view our
scientific investigations in terms of adaptation rather than authority. According to this Darwinian
perspective, the current state of our sciences is essentially a product of past cultural adaptations.
Our perceptions having been governed by adaptations, we can hope to better adapt by improving
our perceptions. However, there is nothing about our beings that should equip us to seek divine
truth in physical reality. One can and should assume that interpretations of experimental data are
subject to change.
If and when we find that our contemporary adaptations are not fulfilling our needs, then
we must seek new adaptations. So that we might find the most suitable adaptations, we must first
understand how and why we have adapted into our present state. We must seek to understand the
evolutionary development of our culture. This is why it is essential that the nature and history of
Western religious thought be taught as a part of any scientific education. We cannot hope to
understand physical reality by merely repeating previously conducted experiments. We must
reexamine our interpretations of them, including how and why we adopted these interpretations.
By challenging our perceptual faculties and our intellectual understandings, we can hope
to perceive possibilities that are beyond our current state of adaptation. We can hope to find that
the limits we currently perceive are unreal, and that we are therefore free to move beyond our self-
defined limits. We can hope to find a world that is more friendly to our needs, and that will
provide more compassionately for our future.

A. THE NATURE OF CREATION MYTH

Myth should not be thought of in terms of falsehood. What is important about a myth
isn't its literal truth or falsehood. A myth is merely a tale by which the teller conveys a message
about reality. What is important about a myth is not the story itself. What is important is what the
myth tells the listener about reality, and about his or her place within that reality.[10]

Many scientists like to think of Darwinian evolution in other than mythical terms, just as
scriptural literalists refuse to see the creation myths of Genesis in mythical terms. Those who
wish to deny the mythical nature of Darwinian evolution claim that it is factual, whereas all other creation myths are false. As logical as Darwin's natural selection may sound, we now know that it is only one factor that has affected biological evolution. The actual process has been much messier and more complex than Darwinian theory had suggested. Consider, for example, the role that viruses play in carrying bits of DNA from one species to another.
Even if Darwinian theory were absolutely correct and complete, however, it would still
function as a myth. We like to think of biological evolution as being a scientific doctrine. It is
myth as well.
All creation myths are creation myths. Period. It makes no difference whether the myth
is tribal, scriptural, or Darwinian. Their differences lie in their foci, not in the fact that each
functions as a myth. The foci of the various creation myths have varied from time to time and
place to place. Genesis was primarily concerned with the "who" of creation. Darwin was
concerned with the "how". The earliest myths weren't particularly concerned with either.
The primary concern of the earliest creation myths was food. The ancient creation myths
of village culture related to the agricultural calendar. The purpose of these myths was to explain
the vagarities of agricultural production, to invoke those forces that could increase production, and
to appease those forces that could produce famine. One needed myth so that one could please and
appease nature.
11

In one sense, the creation had taken place only once. In another sense, however, creation
was a pattern that was repeated each year in the agricultural calendar. Religious ceremonies
marked this calendar, and involved man's symbolic participation in the pattern of creation.
The earlier of the two creation myths of Genesis came from the ninth century B.C. village
culture of the Hebrews. It described God as being like a village craftsman, a potter. God reached
into the clay, molded it into the shape of man, and breathed life into his creation. Man was
therefore of the creation, and his body would return to it.
The later of the two creation myths of Genesis, probably from the fifth century B.C., was
more urbane and philosophical.[12]
Agricultural concerns were less important. The Hebrew culture
for which it was written was far less insular. The Hebrews were now a minority culture living in
exile in urban Mesopotamia. Perhaps the most important goal of their priestly class was to
maintain the separate identity of its people in an environment that invited assimilation.
This later, more urban creation myth claimed that the God of Israel was neither a
mythical beast nor a philandering anthropoid, nor was he some distant being worthy of admiration
but beyond interaction with man. These statements refuted the beliefs of many cultures with
which the Hebrews interacted. With the exception of this one creation myth in Genesis, Hebrew
literature possessed no tales relating solely to deity or deities. Man was a part of every tale.
The concerns that this second myth addressed were both religious and philosophical.
One concern was idolatry. The Hebrews refused to accept sculptured or painted images of their
God. Such images were prohibited by religious law. The second concern was one of
understanding. How can one understand this unseen but omnipresent God? The two concerns met
in a single question: How is it that man can hope to understand this unseen deity that is present, always and everywhere, this God that cannot be confined to any graven image or geographical
location? How else could the Hebrews have found their God in Babylonian exile?
The author of this second creation myth had a simple answer: We were created in the
image of God. Therefore, we need no idols of or to our God. The matter of idols has been solved
because we mere mortals are that idol. Wherever we go, God goes with us because we cannot
leave his idol behind us. Wherever we go, we live in the world that God created for us. Our God
need not pack his bags to join us in exile. Our God was already there, waiting to greet us exiles.
No other scriptural statement has had as great an impact on the world view of the
scientific community as this one creation myth. As has generally been the case with other creation myths, this particular creation myth has been used in ways that its author would never have
intended or envisioned. The author of this myth never intended to engender or sanction a
scientific movement in any modern sense. The very possibility of such a movement would have
been foreign to him.
Creation myths are developed in response to particular questions that emerge in particular
cultural settings. Once the myths are developed, however, their subsequent applications can
extend far beyond any situation that the authors had envisioned. One can describe a creation myth
as being like an intellectual virus. Once it has been released, there is little that can be done to
control either its spread or the meanings that may be drawn from it. A creation myth has the
capability to affect (or infect) just about anything.
For the physical sciences, a complete shift from biblical to Darwinian myth would have
involved radically redefining the nature of the physical sciences. Such redefinition would have
changed the ways in which the disciplines perceived themselves, and the ways in which they
functioned. Radical redefinition of such a fundamental nature never took place. Instead, there has
been a gradual transition, a de facto transition that has rarely been openly acknowledged.
As creation myth, Darwinian evolution has been gradually infecting contemporary
scientific epistemology. Many centuries earlier, biblical creationism had come to dominate
European civilization by a similar process of gradual infection. It took more than two millennia
for biblical creationism to achieve the final state of its infection. This final state took shape only
after certain questions regarding scientific epistemology had been asked, and answers developed
that interpreted the later of the two creation myths of Genesis in its most optimistic light. It was
science that brought biblical creation myth to its final, most recent form.
The authors of Genesis never intended to sanction the physical sciences. Charles Darwin
never intended to alter the physical sciences. Nonetheless, the physical sciences have been
infected and affected by both.

B. CREATION MYTH IN SCIENTIFIC EPISTEMOLOGY

Throughout history, creation myths have always fulfilled the same function, regardless of
which creation myth one selected. Creation myths tell us how we originated. Our origin tells us
who we are, what we can hope to do, and what we can hope to learn and know. This is why our
choice of creation myth is vitally important to our sciences.
Even more than telling us about our universe, our scientific philosophies tell us about
ourselves. The natural laws that we propose can in no way alter the ways in which the universe
functions. We acknowledge. We do not legislate. Because of this, we must say that our sciences
exist only for ourselves.
Biblical creation myth states that God gave man dominion over all of Gods earthly
creatures. One could hardly call this an unscientific statement. Our sciences claim dominion over
all that is within reach of our senses and our instrumentation. Everything physical and biological
is within the domain of science. Although there may be comparatively little that we can ultimately
hope to control or govern, there is nothing that we cannot hope to understand.
Western religion (Judaism, Christianity, Islam) commonly assumes that it is God who
reaches out to man, and not the reverse. When receiving the bread of divine grace and revelation,
one need not ask for the recipe. Religion accepts its own truths as having been revealed by God,
not discovered by man. Both religion and science adopt epistemological premises that they then accept, mostly on faith. Because religion concerns itself primarily with statements of faith,
religious doctrine offers little that can be directly tested.
One who accepts revelation in lieu of investigation need not emphasize self-definition.
Whatever definition of self that revelation offers can be accepted passively. One revelation can
contradict another without causing undue concern. Theology can resolve such contradiction by
merely accepting paradox, equally recognizing the truth of two mutually contradictory revelations.
By contrast, science attempts to test all of its hypotheses, except for certain premises that
are essentially epistemological. Science emphasizes that which can be tested, and usually says
nothing at all about epistemological premises. Even science, however, cannot escape the
requirement that it possess an epistemological faith. The epistemological faith that science adopts
will determine the way in which it interprets its experimental data.
Science seeks to find rather than waiting for God to reveal. In science, it is man himself
who must do all the reaching. It is man himself who must make an intuitive leap into what he
believes to be a valid understanding. Science takes its creation myths more seriously and literally
than does religion because epistemology is of greater concern to the seekers than to the sought.
Religion can leave matters of epistemology mostly to God. For science, however, the question
that underlies every theory is ultimately epistemological.
A religion can function with a loosely defined creation myth. A scientific discipline
cannot. The more actively that one is seeking, the greater the importance that one must attach to
one's self-definition and self-perception. Scientists cannot, in their pursuit of science, depend
upon divine guidance. Hence, the greater the importance of creation myth. Science cannot say
anything about knowledge without first saying something about the knower.
The early development of the physical sciences took place before Darwin published his Origin of Species in 1859. Prior to 1859, European culture had no accepted creation myth
other than the biblical, accompanied by its medieval elaborations. Consequently, the physical
sciences developed their epistemology along the lines of the biblical creation myth.
Over the centuries, European culture had taken the short and simple creation myth of
Genesis and had greatly elaborated on it. The scientific movement accepted this structure of myth
quite literally and placed great importance in it. Had the scientific movement not needed elements
of this myth to support its quest for legitimacy and knowledge, it is doubtful that biblical
creationism would have received the literal emphasis that it did.
Indeed, one could argue that the contemporary Creationist, anti-Evolutionist movement
was actually rooted in this earlier period of the scientific movement. Creation Science was,
indeed, science. Scriptural creation myth, accepted as literally true, was at the core of science. It
was science rather than religion that insisted upon this literal reading of scripture. Had science not
done this, it is likely that our culture would have continued to embrace its medieval pastiche of
creation mythology, a pastiche that did not require scriptural literalism. It was science that
insisted upon limiting Creationism to a literal reading of scripture.
When arguing with a religious establishment regarding the legitimacy of one's endeavor,
it is reasonable that one should borrow from the religious establishment's own myths. Even in the
absence of any such dispute, one can most easily persuade others by relating one's endeavor to the
pervasive myths of the culture.
Today, few scientists acknowledge the presence of biblical creationism in scientific
epistemology, much less profess to believe in it. Nonetheless, Creationism remains a myth by
which the disciplines commonly function. The disciplines are able to do so because they never
acknowledge the presence of this myth in their patterns of thought. One finds Creationism not in direct statement, but rather concealed in accepted scientific language and in underlying
epistemologies.
If one codifies these epistemologies and traces them to their origin, one finds
Creationism. These epistemologies are ultimately based on certain articles of faith. The articles
of faith are rooted in scripture generally, and especially in scriptural creation myth. If one were to
take these articles of faith and compare them with the epistemological implications of Darwinian
evolution, one would be forced to conclude that the particular scientists have been guilty of false
optimism.
The danger of this false optimism does not lie in the optimism itself. Optimism is
necessary. If we are deeply pessimistic about the prospects for a particular endeavor, we are
unlikely to attempt it. But there is a danger inherent in optimism. False optimism encourages us
to believe that we know more than we actually know, that our answers are more certain than they
really are. False optimism encourages inflexibility. We are encouraged to believe that the options
open to us are more limited than they actually are. We limit ourselves.

C. CREATIONIST ARTICLES OF FAITH

The creationist doctrine that infected the physical sciences prior to Darwinian evolution
contained three basic articles of faith. First, God created the universe and all that is within it.
Having completed his creation, he now governs by divine law rather than by arbitrary caprice.
Hence, the universe is orderly and predictable. It was indeed important that God had rested on the
seventh day!
Second, man was created in the image of God. Because of this, he shares certain
cognitive and perceptual faculties with his deity. This being the case, man should perceive much
as God perceives. Man should be able to discover natural laws by staging demonstrative
experiments. Once the demonstration takes place, man should have little difficulty perceiving the
relationship between observed results and the natural law that controls those results.
Third, God created language as well, although not necessarily the language that we now
speak. God's word does not constitute mere expression. His word has special power. One can
speak of natural laws as being the word of God. Somewhere back in the earliest times, God spoke
to man using divine language, and man was able to understand. Somewhere within the language
that we speak, there exist vestiges of this divine language.
By employing proper method, we might therefore hope both to discover true natural law
and to recover some of that divine language. Although blinded by the fall of man, we might yet
hope to regain a part of our vision, the potential for which still resides within us.[13]


D. CREATIONISM AND NATURAL LAW

The concept of natural law that so powerfully directed the history of the physical sciences
was a sixteenth and seventeenth century European concept. Natural law was a part of divine law,
which was the basis of the cosmic order and of its harmony. To quote from Richard Hooker, an
English priest and influential writer who died in 1600:

of law there can be no less acknowledged than that her seat is the bosom of God, her voice the harmony of the world: all things in heaven and earth do her homage, the very
least as feeling her care and the greatest not exempted from her power; both angels and
men and creatures of what condition soever, though each in different sort and manner yet
all with uniform consent, admiring her as the mother of their peace and joy.[14]


According to faith in natural law, God has declared the law. God himself respects the
law that he has declared. God rested on the seventh day, thereby allowing the words that he had
spoken to assume management over the world that he had created. Physical objects that lack
minds and wills of their own have no choice but to follow the divine law of physical reality. God
has spoken. His word has power. Physical objects have no choice but to comply with his word.
Physical reality is therefore governed by natural law. According to this concept of
reality, natural law is not separable from all other forms of divine law. Rather, all forms of
divinely ordained law are part of the same unified whole, the divine order. The individual forms
are, however, describable according to the domains they govern. Scriptural law that governs man
and society does not govern the same domain as does natural law. All divinely ordained law,
however, remains part of the divine order.
To quote again from Richard Hooker:

That part of it [law] which ordereth natural agents we call usually nature's law; that which angels do clearly and without any swerving observe is a law celestial and heavenly
[It was believed that angels lacked free will.]; the law of reason, that which bindeth
creatures reasonable in this world and which by reason they may most plainly perceive
themselves bound; that which bindeth them and is not known but by special revelation of
God, divine law; human law, that which out of the law either of reason or of God men
probably gathering to be expedient, then make it a law.[15]


By extension of a long-established pattern of mind, the concept of natural law emerged
from the earlier concept of scriptural law that already governed the individual person, society, and
the religious culture. The differences between scriptural and natural law were based upon
differences between the domains governed by the two categories of law. Man possessed at least
some degree of free will, whereas angels and inanimate objects lacked free will. Lacking a free
will of their own, such beings and inanimate objects had no choice but to follow the will of God.
This growing faith in natural law as the force governing inanimate physical reality
encouraged belief in the unerring predictability of physical events, and ultimately in their
quantifiability. Centuries of practical experience with astronomy, one of the seven liberal arts and
the queen of the early sciences, encouraged this faith.
Under identical circumstances, the same results should occur every time. By carefully
defining and controlling the circumstances, one could learn about particular natural laws. The
experimental method would lead to the discovery and verification of natural laws. According to
Francis Bacons influential writings of the early seventeenth century, science would progress by
inductive reasoning. Working from observation and experimentation without preconceived
notion, man would be able to discover particular natural laws. Man merely needed open eyes and
an open mind.[16]

Physical reality was an open book that man could read if only he employed the correct
methodology. Galileo himself, a contemporary of Bacon, employed this analogy between physical
reality and the open book to justify his explorations. If one could read directly from the open book of God, then one need not depend upon the previous writings of man. As man read from the open
book, he would discover natural laws that would be valid for all peoples and all ages. Each
generation could repeat the same experiments, and verify the same results.[17]

The naivete of this faith aside, faith in natural law posed one serious methodological
problem: an inability to adapt. Scriptural law that pertained to man and society had a key
advantage over the later natural law in this one respect: Scriptural law could be reinterpreted to fit
new situations.
There was enough flexibility built into the structure of scriptural law to incorporate
change without the concept of scriptural law itself facing serious challenge. Man had free will.
Laws written for beings with free will took this freedom into account.
Scriptural laws were prescriptive, not predictive. Prescription was a function of the value
judgments that scriptural law made concerning the choices of action available to man. Scriptural
law described how man should act. It did not take it for granted that man must act in accordance
with divine law. Furthermore, scriptural law left open the possibility that particular laws might be
de-emphasized in favor of others in later times should priorities need to be reconsidered. The
written prophets of the later centuries of the Old Testament frequently engaged in such
reconsideration.
By contrast, natural law as it pertained to physical reality was predictive rather than
prescriptive. Once established in the human mind, each particular natural law mandated particular
predictions. Natural law, being archetypally vested with the power of God, dictated that future
natural laws added to the body of natural law comply with those laws already accepted. This
dictate placed great constraints upon the patterns of possibility open to the human mind. Faith in
natural law dictated that continuity with existing patterns of thought be favored over divergence
from accepted patterns.
This is the pattern of normal science found in the writings of historian of science T. S.
Kuhn, cited later in this chapter. The paradigms of the various disciplines in normal science have
great governing power over their respective disciplines. Two observations are in order. First, this
inordinate degree of power is consistent with this older faith in natural law, the faith of Francis
Bacon and Galileo. In part, the power of the paradigm may still be a vestige of this older faith.
Second, the power of the paradigm reinforces our evolution-based and inherited tendency
to place great faith in the patterns we find residing in our minds. The power of the paradigm and
its ability to become a mind-dominating archetype are discussed later in this chapter.
In recent decades, we have retained the term "natural law," although what we mean by
the term has changed. The natural law of today is not natural law as it was understood in the
seventeenth century. Natural law, formerly the word of almighty God, is now little more than
observed natural tendencies. At least officially, we no longer claim to read the mind of God from
some open book.
In place of the older faith in a man created in the image of God and capable of reading
from God's open book, we now possess no doctrinally defined faith at all. Instead, we have a
generalized optimism concerning man's faculties and the potentials of his intellect, a generalized
optimism that no longer has any foundation in doctrine. In a situation in which the foundations of
faith are no longer defensible, the easiest course of action lies in watering down the tenets of the
old faith. We retain the language of the faith, even though its words have been stripped of their
essence.


E. MAN CREATED IN THE DIVINE IMAGE

When one mentions biblical creation myth, one might reasonably assume that one is
discussing a myth taken from Genesis. In fact, the creation mythology in Genesis is brief and
sketchy. Most of what passed for biblical creation myth during the Middle Ages, the Elizabethan
period, and beyond did not come from Genesis. It was instead a pastiche of other themes and
legends taken from a range of sources, mostly nonscriptural. Medieval creationism blended
scripture, legend, literature, and art with the imaginations of Christian writers from earlier
centuries.
Medieval creation myth supported two themes that were very closely tied together. One
was the concept of a world order, a universe created by God and ruled by his laws. The other was
the theological scheme of sin and salvation. Because these two themes were so closely tied
together, creation mythology included the fall of man as well as his creation.
In discussing the world view of the Middle Ages and the Elizabethan period, E.M.W.
Tillyard noted the following:

...the part of Christianity that was paramount was not the life of Christ, but the orthodox
scheme of the revolt of the bad angels, the creation, the temptation and fall of man, the
incarnation [in Christ], the atonement, and regeneration through Christ. [18]


One can break this scheme into three separate themes. The first was the creation itself.
Man was created in the image of God. As interpreted by earlier scientists including Francis Bacon
and Galileo, this meant that man should possess certain perceptual and intellectual faculties that
his creator also possessed. As limited as these faculties might be when compared with those of his
creator, they should nonetheless be like those of his creator. Man could therefore assume that,
although he was seeing less than his God would see, he was seeing in much the same manner as
his God.
The second theme was the fall of man. Man had experienced his fall from grace in the
Garden of Eden when Adam and Eve ate the forbidden fruit. They and their human successors had
thereby been blinded to some degree. Preachers and poets could bemoan the great degree to
which the human species had fallen short of its divinely granted potential.
For scientific epistemology, this scheme of creation and fall was particularly useful. One
could claim that man had been given faculties like those of his God, while at the same time
acknowledging that there was a great gap between God and man. Originally given vision like his
God, man had been very substantially blinded. This scheme allowed scientists to claim that man
had a great unrealized potential, his sciences, something that he could yet hope to attain.
The myth of the fall suggested that somewhere within man his God-like faculties still
existed. To make use of them, however, man needed to find the means to overcome his blindness.
That was how Francis Bacon saw the human situation. He proposed that the new sciences based
upon inductive reasoning would allow man to overcome much of his blindness.
Francis Bacon was not the first to suggest that man might overcome a part of his
blindness by observing nature. Two centuries earlier, Raymond de Sebonde advocated this
possibility in his Natural Theology:

The poor wanderer, wishing to return to himself, should first consider the order of things
created by the Almighty; secondly, he should compare or contrast himself with these;
thirdly, by this comparison he can attain to his real self and then to God, lord of all
things. [19]


A portion of Sebonde's popular and widely circulated work was finally banned by the
Roman Catholic Church as heretical, although not because of his advocacy of scientific
examination. The prologue to his work was placed on the Index of Prohibited Books by the
Council of Trent. What led to this ban was his contention that he had discovered in nature all the
truths contained in scripture. What this suggested was that divine revelation through scripture was
unnecessary.
The third theme was the Pauline theology of the redemption and regeneration of man
through Christ. The sin of Adam had rendered man subject to the old laws that Christ came both
to fulfill and set aside. Redemption and regeneration was to come not from adherence to the old
laws, but rather from the very being of Christ himself.
According to the beliefs of the church, the sacraments of the church were necessary to
facilitate the redemptive and regenerative powers of Christ. With regard to the possibility that
some degree of redemption might be attained by studying God's created world, the church was
capable of accepting a wide range of opinion. Within the framework of accepted church
teachings, there were places for both the most pessimistic and ascetic of monks and the most
optimistic of scientists. In their optimism, the scientists were not actually departing from the
medieval world view, but were merely expressing one possibility present within it.
As Tillyard observed:

This double vision of [1] world order and [2] the effects of sin was the great medieval
achievement. Its great strength is that it admits of sufficient optimism and sufficient
pessimism to satisfy the different tastes of various types of man and the genius for
inconsistency and contradiction that distinguishes the single human mind. Genesis
asserts that when God made the world he found it good and that he created man in his
own image; but that with the Fall both man and (it is hence deduced) the universe were
also corrupted. It was agreed that some vestiges of the original virtue remained; but the
proportions could be varied to taste. [20]


When one examines the twin themes of man's creation in the image of God and his fall in
the Garden of Eden, one can readily understand why the intellectual methodology that the physical
sciences adopted paid little heed to cultural history. Methodology considered human culture to be
entirely irrelevant.

F. TRANSCENDING CULTURE

The goal of science was to transcend culture, to leave culture behind. As they were
discovered, natural laws were to be discussed relative to the will of the creating deity and the
world that he had created. Such laws necessarily governed human culture, but were not a product
of it.
There was an obvious parallel between this attitude and the Pauline theology of
redemption through Christ. Paul advocated leaving the old law behind, the law that had
previously governed Jewish religious culture. The culture could not redeem itself, but must
instead be redeemed by the person of Christ. In a similar way, science advocated leaving the
culture itself behind. Science would seek its redemption beyond the boundaries of human culture,
in the immutable, the eternal, the uncontaminated and unpolluted will of God.
This faith manifested itself in the format and pretensions of a later form of literature, the
scientific textbook. When discussing the textbook as a form of literature, we should bear in mind
that its roots date back at least two centuries. As a form of literature, it is clearly pre-Darwinian.
Darwinian evolution did little to modify the form of this literature. Biological evolution is itself a
subject that is discussed within this pre-existing literary form, a form that had been inspired by the
medieval version of biblical creation myth.
Scientific textbooks have historically given little attention to the cultural, linguistic,
technological, and religious roots of the paradigms they discuss. This omission is understandable
when one notes that the goal of the various disciplines is to reach beyond the confines of culture.
The goal of science is to achieve understandings of natural phenomena that themselves extend
beyond culture, both in space and time. Textbooks claim that we have already made progress
toward this goal, and that further efforts using the scientific method will make further progress.
Therefore, textbook authors assume that they can justifiably ignore human culture as a factor
relevant to the scientific disciplines.
With regard to the human intellect, the most important lesson that we should draw from
biological evolution is that true human objectivity is impossible. Nothing about our beings has
equipped us to perceive the immutable and eternal laws of nature. Rather, we were equipped by
our evolutionary past to live and function within a particular natural environment. The perceptual
faculties given us at birth equip us for survival, not for intellectual discovery.
Kuhn's analysis of the nature of the history of science recognizes this to be the case,
although he never describes the human situation in terms of mans evolutionary origins. One finds
this recognition implicit in the terminology that Kuhn chooses. In earlier times, scientific
historians wrote of the discovery of natural laws. Kuhn wrote instead of the development of
paradigms, and of the episodes when earlier paradigms were discarded and replaced. This shift in
terminology is highly significant.
If one believed in biblical creation myth, then one could reasonably say that man
discovers natural laws. In making the statement, the writer assumes the following: The natural
law was ordained by God. The discovery of it involved perception and recognition of it
rather than an exercise of human creativity. The law was not created by man. It already existed.
These acts of perception and recognition were based upon human faculties given to man
by God. Because God gave man these faculties independently of either natural or cultural
environment, the exercise of them can take place independently as well. Therefore, both the
process of discovery and the later appreciation of the significance of the natural law can and do
take place outside the context of everyday human physiology and culture.
Prior to Kuhn, the textbook writer who wrote about the discovery of natural laws would
reasonably ignore the role of human culture in the development of his discipline. The goal of the
textbook writer's discipline was to transcend human culture. The writer believed that the previous
history of the discipline had already involved a substantial degree of transcendence. Where
transcendence has taken place, the human culture that has been transcended can properly be
ignored. Under such circumstances, to give undue recognition to the role of human culture in the
discoveries might suggest that the discoveries themselves had not, in fact, been transcendent.


G. THE NATURE AND IMPORTANCE OF LANGUAGE

The third of God's great creations was language, although not necessarily the language
we now speak. Prior to Darwin, theories of the origin of language paralleled those of the origin of
man. The tales of Genesis portray God as having spoken to man shortly after the act of creation.
This having been the case, then who created language and gave man the power to understand it?
The answer is obvious: God. Why did God give man the power to use language and to
think using its words? Again, the answer is obvious: because God himself spoke and thought
using language. Language was a part of the commonality between man and God, a part that God
shared with man alone among all of his temporal creations.
In such a context, man could easily perceive natural law as being linguistic in character.
The word of God has special power, the ability to effect fulfillment of those words. God said,
"Let there be," and there was. God's words became the law of the universe. Galileo
could therefore speak of the universe as being a divine book open for man's reading.
God spoke with man, sharing with him the linguistic essence of reality, although not
necessarily the exact words of particular divine truths. The law of scripture was therefore like the
natural law of the universe, differing only in the domain to which it applied. Man, possessing both
language somewhat like the word of God and senses that approximated those of his God, could
therefore hope to discover and write down certain natural laws of physical reality. Today, the
mathematical formulations of physics still exude an aura of divine truth.
The tales of the creation of language paralleled those of the creation of man himself in
another respect: Just as language had been created by the divinity in the pattern of the divinity
himself, there had been a fall. The fall of man took place in the Garden of Eden. The
fragmentation of language took place at the Tower of Babel.
Like the myth of the creation and fall of man, the myth of the creation and fragmentation
of language supported a wide range of opinions regarding the potentials of man's language. One
could be deeply pessimistic about the future possibilities, or one could be wildly optimistic. The
pursuit of scientific understandings being an essentially optimistic pursuit, most scientists tended
to be optimistic about the potentials of scientific language.
Western thought has been characterized by extreme claims in the directions of both
optimism and pessimism, often within the work of the same person. On one hand, Plato expressed
pessimism concerning the deficiencies of human language and perception, likening what we can see
to mere shadows on the wall of a cave. On the other hand, however, he proposed the concept of
Ideal Forms.
These Ideal Forms, existing outside the realm of human experience but at least partially
accessible to the human mind, were the purest abstractions of the meanings of human words.
There was a correspondence between the human words and the Ideal Forms to which they applied.
Therefore, human language with all its deficiencies had at least the potential of reaching toward
the transcendent.
One finds similar belief in the transcendent possibilities of human language in the late
medieval doctrine of plenitude, a Christian doctrine that was Neoplatonic in its origin. The
essential claim of this doctrine was that God, lacking envy, had created everything that it was
possible to create, and that there was goodness in the resulting diversity. Like so many other late
medieval and Renaissance concepts, plenitude was subject to a wide range of interpretations,
particularly regarding any restrictions that it placed on the free will of God.
With regard to both the patterns residing in mans mind and the capabilities of his
language, plenitude suggested that man could envision and describe certain of Gods creations that
man had never actually seen or experienced. The reach of our intellects and our language could
therefore extend beyond the range of our senses. In making this claim, plenitude invited man to
contemplate the incredible diversity of Gods creation, and to reach out with both his intellect and
his language to embrace this diversity.
Just as man could hope to discover natural laws that transcended human culture, he could
also hope to express these laws in language that would also transcend human culture. Therefore,
one need not examine the language of science in an attempt to discover the cultural roots of
accepted scientific concepts. To do so would not only be unnecessary, but it might also be
counterproductive: To acknowledge the cultural roots of the language of science would be to
acknowledge the less-than-transcendent nature of its paradigms.
Just as scientific histories and textbooks prior to Kuhn saw little reason to discuss the
cultural roots of scientific concepts, they also saw little reason to discuss the linguistic roots of
these concepts. Words have histories, but those histories were not particularly relevant to our
understanding of scientific paradigms. Language had the ability to transcend culture, rendering
cultural explanations of language unnecessary.
The fragmentation of language at the Tower of Babel was a central concept in eighteenth-
through mid-nineteenth-century philology. If one could recover a portion of that language that
had existed prior to fragmentation, then one could attain a degree of access to the mind of God. In
an ideological sense, this search was every bit as scientific as was the examination of physical
reality by the experimental method.
Early in the nineteenth century, it appeared that this quest was moving toward substantive
results. Examination of Sanskrit, an ancient language that was still the language of religious
writings in India, showed close parallels with Greek and Latin. From this, it was
possible to conclude that they were all derived from a common source, a language that had existed
prior to fragmentation. The existence of an Indo-European language group allegedly validated the
biblical tale of the Tower of Babel.
Development of scientific terminology in the eighteenth and early nineteenth centuries
was governed by both the educational ideals of the Renaissance and the emerging discipline of
philology. In its educational ideals, the Renaissance had glorified the study of ancient Greek and
Latin writings, and had greatly encouraged education in these languages. The study of these
languages had remained an essential part of any program of higher education.
The expectation that all educated persons would share in the knowledge of ancient
languages, languages that were the vernacular of none, established a fertile climate within which
scientists could develop new words and concepts. If one wished to say something while avoiding
the usual connotations found in the contemporary vernacular, one could say it in the simplest way
using one of the ancient tongues.
Escaping vernacular connotations in this manner enabled scientists to invest their
terminology with an aura of transcendence. Having rooted the etymologies of their words in the
nonvernacular, scientists could claim that their concepts were themselves rooted in something
other than their contemporary cultures. Their linguistic strategy therefore supported their claims
to transcendent knowledge.
This linguistic strategy was reinforced by the faith of the philologists. Adopting their
beliefs from a literal reading of the Bible, nearly all philologists believed that there had been a
single, divinely created language prior to the Tower of Babel. It would therefore be possible to
come closer to transcendent understanding by returning to the earliest meanings of ancient words.
One way to achieve an understanding more transcendent than Aristotle's was to return to a
language earlier than Aristotle's.

H. ENERGY AS LINGUISTIC STRATEGY

One can find this linguistic strategy present in Thomas Young's adaptation of the term
"energy" into physical theory. In its earliest, pre-Aristotelian etymological roots, "energy" meant
simply "in work." When we define energy as "anything that can do work," we are referring
directly back to that earliest of etymologies. At the same time, we are employing a word usage
that was not a part of the vernacular in Young's time. These earliest of roots, employed in a
nonvernacular way, helped to vest "energy" with transcendent value.
The nineteenth-century understanding of energy did not specify what it was that was "in
work." It merely acknowledged the existence of physical work that need not be done by human
muscle. If falling water was substituted for human or animal muscle power to turn a mill wheel,
what was obtained was work, just as if it had been done by muscle power.
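To make this equivalence concrete, here is a minimal numerical sketch in Python (the
particular mass and height are illustrative assumptions, not figures from any historical source).
The work available from falling water is its gravitational potential energy, W = mgh, and the same
number of joules would measure the equivalent muscular labor:

    # Work obtainable from falling water: W = m * g * h (in joules).
    # All numerical values are illustrative assumptions.
    g = 9.81             # gravitational acceleration, m/s^2
    mass_kg = 100.0      # mass of water passing over the mill wheel
    height_m = 5.0       # height through which the water falls

    work_joules = mass_kg * g * height_m
    print(f"Work from falling water: {work_joules:.0f} J")
    # About 4905 J, the same work a laborer would perform in lifting
    # 100 kg back up through 5 m.
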
The concept of energy emerged in response to the presence of work that was being done
without muscle power. It emerged with the assistance of particular ideas regarding the proper
nature of education, the faith of the pre-Darwinian philologists, and the use of a widely sanctioned
linguistic strategy.
This new word removed not only the connotations of the everyday vernacular, but also
man himself. Thanks to the creation of a new word, work became abstracted to such a degree that
it no longer needed any connection at all with the activities of man. In this sense, the concept of
energy was a vastly more inclusive concept than any that had preceded it.
Energy shifted the focus of thought from man and his activities to the physical universe
around him, a universe that included man. This was an altogether fitting shift for an endeavor that
sought transcendent understanding. Something abstracted from everyday human activity, but
intentionally distinguished from it, could be used to explain certain mechanical events in the
physical universe.
Like the faith in natural law, faith in the transcendent power of a linguistic strategy
placed definite limits on the possibilities open to science. The greater the degree to which one
believed in ones linguistic strategy, the greater the degree to which one was stuck with ones
terminology.
One did not merely discard one term in favor of another as science progressed. Rather,
one kept attempting to redefine the existing word. Because the word itself points toward
transcendence, it must be retained and redefined. Our failure to understand a word resides in
ourselves, not in the word. It is therefore appropriate that we make further effort to understand the
word. We attempt to achieve transcendence by learning to understand the transcendent meanings
of existing words, not by inventing new words or concepts to replace them.
By the standards of today, the technological culture within which the concept of energy
was first abstracted was primitive. Despite development of the early steam engine, the primary
sources of inanimate mechanical power were still falling water and blowing wind.
Man could warm himself in the heat of the sun. He could build fires on which to cook,
bake bricks, and make iron. He had only begun to convert the heat of fire to mechanical power.
He could look up at the sky and fear the strange and awful power of the lightning bolt.
Electromagnetic technologies, however, were still beyond his imagination.
Not long after the concept of energy was first developed, technological advances forced
man to broaden the concept. Development of the steam engine, which had begun in the early
1700s, had forced scientists to include heat as a form of work. Obviously, heat itself in no way
resembled work. Nonetheless, it had to be included because it could be transformed into work.
Anything that can be transformed into everyday work must be included as a form of energy
as well, even though we may not be able to transform every unit of that energy form into work.
Energy is anything that can in theory be transformed into work, even though it may not be
practical to do so. Development of the electric motor forced inclusion of electricity and
electromagnetic phenomena. Splitting the atom forced inclusion of matter itself, since matter
could be destroyed to produce energy.
As inclusive as the concept of energy was in its context in the early nineteenth century, it
was quite limited and far less inclusive by the standards of today. However, the concept was then more
readily comprehensible in a practical, everyday sense than it is today. It is the extreme
comprehensiveness of energy that makes it impossible for us to define it. Today, what we
comprehend most readily about energy isn't some simple definition of what energy is, but how
energy functions in contemporary physics.
Energy is the concept that allows us to equate one quantity of one phenomenon with an
equivalent quantity of some phenomenon that appears to be entirely different. We can, for
instance, equate heat with electricity, and mechanical power with heat. We can even equate mass
with heat and power. We have equations that tell us how much of one equals how much of
another, establishing transformational equivalences. Comparing these equations with the practical
outputs we obtain from our technological devices tells us how efficiently our devices are
functioning.
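As a rough illustration (a minimal sketch in Python, using standard conversion factors; the
particular quantities and the 35 percent engine figure are assumptions for illustration only), the
following lines equate amounts of heat, electricity, and mass, then compare a device's practical
output against the theoretical equivalence:

    # Transformational equivalences among energy forms (SI units).
    CAL_TO_J = 4.184           # joules in one calorie of heat
    KWH_TO_J = 3.6e6           # joules in one kilowatt-hour of electricity
    C = 2.998e8                # speed of light, m/s

    heat_j = 1000 * CAL_TO_J         # 1000 calories of heat
    electricity_j = 0.5 * KWH_TO_J   # half a kilowatt-hour of electricity
    mass_j = 1e-9 * C**2             # one microgram of mass, via E = m * c^2

    # Efficiency: practical output measured against the theoretical input.
    input_j = heat_j                 # heat supplied to a hypothetical engine
    output_j = 0.35 * input_j        # assumed measured mechanical output
    print(f"heat: {heat_j:.0f} J  electricity: {electricity_j:.0f} J  mass: {mass_j:.2e} J")
    print(f"engine efficiency: {output_j / input_j:.0%}")
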
Energy is a concept that invites us to seek some underlying unity among all the
phenomena that we can mathematically equate. As such, energy is less of a denotative concept
than it is a perspective from which we attempt to define the physical realities of our world.
Understood in this sense, energy is not the answer, but rather the question.
At its simplest, most archetypal level, the division of physical reality into matter and
energy is nothing more than the division of language into nouns and verbs. Matter is the noun.
Energy is the verb.
One of the greatest achievements of modern physical theory is that it has accorded the
verbal reality, energy, equal status with the noun, matter. Ancient thought was strongly biased
toward nouns, never affording this equality of status. Physical theory's assertion of equality ranks
as one of the greatest conceptual advances in the history of human thought. However, affording
noun and verb equal status can in no way abolish the distinction between the two.
Nothing is more basic to linguistic structure than the division of reality into nouns and
verbs. Consequently, nothing is more basic to physical theory than the division of reality into
matter and energy. Our structure of theory is based upon the structure of our language.
We have no reason to believe that physical reality must be structured according to the
patterns that structure our language. This being the case, efforts to unify matter and energy are
misguided. Such efforts will inevitably collide with the structure of language itself, which is what
provides the archetypal basis for the division.
Rather than attempting to reconcile and unite matter and energy, it would be better to
employ a single concept that is not archetypally distinguishable as either noun or verb. I propose
to use the concepts of information, information processing, and the experiencing of information.
As I employ them, they refer only to experiential phenomena. They are not directly concerned
with structure. Structured matter, the noun, and energy, the verb, are both subordinated to the
experiencing of information.
There is nothing transcendent about our ill-defined concept of energy. Rather, energy is
an invitation to pursue a transcendent understanding of the unifying principles of the universe.
Within the context of an evolutionary understanding of the nature of man and his language, it is
obviously unrealistic to believe that we can achieve such an understanding. However, this does
not mean that we should not try.

I. THE IMPACT OF DARWIN

Darwinian evolution has proven particularly influential because it was the first genuinely
original creation myth to have emerged in more than two millennia. Darwinian evolution was also
unique in that it was probably the first creation myth that was not intended to be anthropocentric in
its focus. Developed to explain differentiations among various but obviously related
species living in different but contiguous environments, its application quickly spread to other
areas of human thought.
Although a consciousness of the existence of evolution had begun to develop in both
linguistics and biology prior to Darwin, it was Darwins work that first proposed a plausible
mechanism that explained why biological evolution took place. Because it proposed to explain the
mechanism for biological evolution, the work of Darwin and his successors powerfully affected
the human imagination.
Prior to Darwin, the old creation myths taught by the church had shown scientists how
and why they could hope to achieve authoritative understandings of physical reality. With
biological evolution, scientific doctrine began to challenge the authority of science itself, although
science had no desire to challenge itself.
The implications of biological evolution could not be confined to the discipline of its
origin. The new theory, useful within the biological disciplines, inevitably raised questions about
the nature of the biologist and the physicist as well.
According to evolution, both mans senses and his mind had evolved in primitive
circumstances from biologically more primitive antecedents. His faculties had developed to help
him survive in a primitive environment. Nothing about this origin of his faculties suggested that
he should be able to share in God's understanding of our universe. Nothing about this evolved
being's language should make such understanding easy or self-evident.
Rather than being a potential asset, our evolved past would more likely place stumbling
blocks in our path. Faculties developed for survival were never intended to achieve abstract
understandings. Although evolutionary doctrine had proven to be a useful tool within the
biological sciences, its implications were a dagger pointing at the heart of the physical sciences.
The response of the physical sciences was therefore predictable.
The response from within the physical sciences was a nonresponse. Physical scientists
continued as if little had changed. Physical theorists continued to believe that they were building
the great intellectual edifice of modern times. Evolution was a scientific doctrine, not a creation
myth that should alter their understandings of themselves and their pursuits. It was a doctrine that
should be included in scientific textbooks, but not one that should alter the textbook itself as a
form of literature.
Ironically, biological evolution posed a more serious threat to the scientists than to the
church. Medieval creation mythology sanctioned a wide range of responses concerning human
potential, from deeply pessimistic to wildly optimistic. When a new doctrine such as evolution
challenges the more optimistic of these responses, the church can do as most mainstream theologians have done. It
can accept evolution as a scientific fact, but contend that some underlying message present in
scripture and legend remains intact and valid. The church can always fall back upon its advocacy
of humbleness, long a tenet of Christianity.
Physical scientists had based their endeavor upon a highly optimistic interpretation of
traditional creation myth. Optimism had long been considered essential for the legitimacy of their
endeavor. They had long advocated an exalted image of mans perceptual and intellectual
faculties, an image that was now inconsistent with doctrines of biological origin. To a greater
degree than the traditional creation myths themselves, it was this optimism that was now under
attack.
For the physical sciences, what was ultimately at stake was our understanding of the
intellectual significance of the experiment. Was the essential nature of experimentation
demonstrative or interpretive? Traditional creation myth as read by the scientists had suggested
that interpretation should not be an overly difficult task because man was well equipped to make
valid interpretations. The essential function of experimentation would therefore be demonstrative.
The primacy of demonstration is a theme that still dominates most science textbooks in the
physical disciplines.
Darwinian creation myth suggested quite the opposite. Nothing in mans being equipped
him to engage in valid interpretation. He could observe, but the meanings of his observations
would be far from self-evident. Because observation itself inevitably involves interpretation, he
could not even be certain that he was observing accurately.
Demonstrative experimentation disciplines interpretation by ruling out many of the more
fanciful interpretations, but does not in and of itself provide answers. Our answers are
interpretations of what we believe to be the significance of what we believe we have perceived.
The greatest difficulty we encounter with interpretation is that our interpretations are
governed by the questions we ask. The ways in which we frame our questions determine the
answers that we get. We frame our questions from within the contexts of our cultures.
Consequently, the interpretations we generally reach at any given time are those that are the most
consistent with our contemporary cultural understandings.
Interpretation inevitably involves a degree of circular reasoning. The questions we ask
determine the range of possible answers that we can consider. The answer will therefore be
consistent with the question. Because the answer is consistent with the question, it tends to
reinforce our perception of the validity of the question.
We cannot really separate question and answer from each other. The two become
inextricably meshed with each other into a single, inseparable entity that is closely related to the
contemporary culture.
The question is a matter of interpretation just as much as the answer is a matter of
interpretation. How did scientists first interpret phenomena that they were encountering in their
laboratories? Why were these interpretations, called paradigms by Kuhn, adopted? What were the
difficulties that later scientists encountered when they applied these paradigms to other
phenomena? Why were earlier paradigms replaced by later paradigms in revolutionary paradigm
shifts? How does physical theory cope with the inadequacies of its paradigms?
As a historian of science, Kuhn did his best to retain faith in the existence of a Scientific
Revolution. To preserve this faith, he found it necessary to explain the Scientific Revolution as
involving a continuing series of paradigm shifts, a series of minirevolutions within the various
disciplines. A historian of religion, examining the same historical data, might justifiably have less
faith in the existence of a Scientific Revolution.
Instead, a historian of religion might see in science a culmination of a set of earlier ideas
and possibilities, a continuity with the past rather than a revolution against it. For the historian of
religion, science began with a sense of its own possibilities, a sense in search of some later
understanding of its own limitations. The history of religious thought has been filled with ideas
that were intended to solve all previous problems, but that then raised new problems of their own.
For those who wished to compose a historiography to justify the new institution of
experimental science within Western culture, the hypothesis of a revolution provided a convenient
pattern. The justification for the new institution lay in the revolution itself. Evolutionary
developments within the physical disciplines then followed the revolutionary beginnings, just as
growth follows birth.
This pattern of historiography was nothing new. Anyone familiar with the Old Testament
will find it quite familiar. After all, Moses hadn't climbed Mount Sinai without institutional
cause. Every movement has to claim a beginning.

J. EVOLUTION AND CREATIVITY

Scientific paradigms are creations of man, not creations of God. We should therefore
speak of man as creating paradigms, not as perceiving or discovering divine laws. Our creations
will most likely fall well short of transcendence.
Our evolved past is our inescapably original sin. It is the sin of our origin. However, this
original sin need not doom us to failure provided that we are willing to wrestle with it.
When examining scientific endeavors from our perspective as biologically evolved
beings, we should first ask what it is about us that will most likely doom us to failure. It would be
inappropriate for us to ask what it is about us that should allow us to succeed.
There is something primitive about the human mind that enables us to create abstract
paradigms, and that subsequently gives these paradigms great power over us. There must be some
reason rooted in man himself why we are driven to create paradigms, and why we then allow these
paradigms to become psychological archetypes.
Paradigms are not toys that the mind can objectively manipulate. The paradigms that
man creates have the power to dominate his mind. They definitely have their psychological
hazards.
Intuitively defined patterns become like the mind-dominating archetypes of Jungian
psychology. Our thoughts are governed by patterns whose validity we do not challenge, but
whose contents we may not be able to define. This lack of clear definition leaves us with no solid
foundation from which to challenge and redefine our understandings.
The concept of energy as we use it in our culture has acquired great power over our
minds because it fuses two earlier archetypes, both of which are individually quite powerful.
Together, the two reinforce each other. One archetype, natural law, is structural and
epistemological. It seems readily and religiously believable. The second, water power, is
technological, easily comprehensible, and can be easily seen in the minds eye. Because we can
see it so easily in our minds, we find the archetypal pattern easy to apply.
With only the rarest exceptions, the true goals of the process of creative thought are
preservation and continuity, not change for the sake of change. As we subconsciously seek new
patterns to solve particular problems, what emerges are those patterns that seem to have the most
in common with already familiar patterns. As a general rule, the new patterns are merely long
familiar patterns that we apply to new situations, variations on old themes.
Historians of science have noted the existence of a common pattern in the various
episodes of creative insight that noted theorists have experienced. The theorist is engaged in
nothing of practical consequence at the moment of the insight. Insights commonly take place in
bed, the bath, or the bus, the famed three Bs. The insight is immediate and startling. The theorist
visually sees some perplexing problem solve itself in front of his eyes, even though none of the
objects involved are actually present.
I suggest that the faculty of creative insight is actually an adaptation of an earlier and
more primitive faculty, one that certain individuals have learned to use in a newer way. The
insight seems immediate and startling, just as the vision of a tiger's stripes might startle one living in
the wild. The creative insight is really nothing more than the vision of a tiger of a different stripe.
As a general rule, we do not perceive truly new beasts of some radically new family.
Rather, we perceive tigers that have different patterns of striping, or slightly differing
configurations of body. When our perceptions jump from tiger to something new, an event that
we experience as being a creative insight, what we perceive is some new variation on the
longstanding theme of "large cat."
"Large cat" is an enduring psychological archetype. There is good reason why
psychological archetypes are as enduring as they are. Our faculties were developed for survival,
not for intellectual understanding. For survival faculties to function properly, the being possessing
the faculties must place great confidence in them. This is because the organism must be prepared
to respond quickly based upon the information that its senses gather.
For example, consider the plight of a rabbit living in coyote country. The pattern of the
coyote must be deeply imbedded in its mind. The rabbit must be prepared to respond immediately
upon perceiving the presence of a coyote. The rabbit that stops to contemplate the meaning of
coyote while being chased by one will quickly become coyote food.
The organism must unquestioningly accept from within its mind the ingrained patterns
that allow it to evaluate particular sensory inputs. It must quickly evaluate these inputs as being
either promising or threatening, and act accordingly without delay. This is why the repatterning
activities involved in creative thought are the exception rather than the norm.
We innately assume that the existing patterns governing our thought are both valid and
valuable. Mental creativity is therefore an inherently conservative activity. Where we become
consciously aware of cognitive dissonance, we attempt to resolve the dissonance by either
substituting other patterns that are already familiar to us, or by making minimal changes in our
existing patterns. We make every effort to retain our existing patterns with as few changes as
possible.
We can note the presence of this pattern of activity in thermodynamics' transition from
the caloric to the kinetic theory of heat. In response to the discovery that heat was, in fact, being
transformed to mechanical power, thermodynamicists theorized that heat and practical mechanical
power both existed as segments of a continuum of mechanical motion. Heat was merely the
random motion of molecules, whereas practical mechanical power moved larger bodies such as
pistons and shafts.
This transition from one paradigm to its successor was the most conservative that could
have been devised, allowing continuity with most of the older pattern of thought. The kinetic
theory incorporated qualitative transformation (of heat into mechanical power) into the older body
of theory by denying that the transformation was genuinely qualitative.
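The kinetic claim is directly computable. As a minimal sketch in Python (assuming an
ideal gas of nitrogen molecules at an arbitrarily chosen room temperature), the mean kinetic
energy per molecule is (3/2)kT, and a typical molecular speed follows from it:

    import math

    k_B = 1.380649e-23    # Boltzmann constant, J/K
    T = 300.0             # assumed temperature, kelvin
    m_N2 = 4.65e-26       # mass of one nitrogen molecule, kg

    # Heat as random motion: mean kinetic energy per molecule is (3/2) * k_B * T.
    mean_ke = 1.5 * k_B * T
    # Root-mean-square molecular speed: v = sqrt(3 * k_B * T / m).
    v_rms = math.sqrt(3 * k_B * T / m_N2)

    print(f"mean kinetic energy per molecule: {mean_ke:.2e} J")
    print(f"rms speed of nitrogen at {T:.0f} K: {v_rms:.0f} m/s")

The disordered motion of individual molecules at several hundred meters per second is what the
kinetic theory identified with heat; the ordered motion of pistons and shafts occupies the other
end of the same mechanical continuum.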
That which could not be retained as literal truth was retained by archetypal analogy.
Although the steam engine did not literally function as a new water wheel, the principles of water
power could still be used by analogy to explain the steam engine.
One other quality of the creative insight should be noted: Both the question and the
answer often emerge simultaneously as parts of the same vision. This should tell us something
about the relationship between language and fundamental mental processes. The structuring of
language into questions and answers emerged from the need to communicate with other beings,
and from the need to make decisions regarding possible courses of action.
This structuring of language did not emerge from the most basic faculty of mental
function. The most basic function of mental activity is pattern recognition. We recognize the
presence of a tiger by its stripes. Once we recognize a pattern, then we decide what to do about its
presence.
In the case of the creative insight, the problem that spontaneously solves itself is one on
which the individual has been working for some time. More often than not, the problem could
have been readily solved had the individual found the right question. In the creative vision, the
question and the answer emerge simultaneously. The question and its answer fit into a single
pattern, from within which they then tend to reinforce each other.
Immediately after the insight, the question and its answer fall into the usual pattern of
mutual reinforcement in the laboratory. The question determines the range of possible answers
that the experimenter can explore in his interpretation, one obviously being the answer from the
insight. The experimenters interpretations then tend to validate the question, producing a closed
loop that inevitably involves some degree of circular reasoning.

K. OF AGRICULTURE AND THE GODS

While nothing has equipped us to perceive the laws of nature and to immediately
appreciate their significance, we cannot rule out the possibility that we might discover such laws
by fortuitous accident. If we are wise, however, we will never refer to our perceptions and
intuitions as being natural laws. We must leave open the possibility that we may have overlooked
something, that we may be wrong in part.
Thus far, my discussion of the implications of Darwinian evolution has suggested that
these implications are generally negative with regard to human potential. There is reason for
optimism as well. Evolutionary theory considers behavior as well as genetics. With regard to
behavior, genes function as enabling agents. However, it is not necessarily true that they control
behavior. With regard to the potentials for human behavior, evolutionary theory is not rigidly
deterministic.
We cannot say that anything about our evolved beings has equipped us to perceive and
appreciate the natural laws of the universe. At the same time, however, we must acknowledge that
nothing about our genetic makeup deprives us of the right to search. Although our search has
obviously been affected by our genes, the search itself is a matter of behavior that we can attempt
to freely choose.
We are, after all, the first earthly species to have constructed civilizations. What was it
about our primitive faculties that enabled us to construct civilizations?
The two great innovations that enabled primitive man to progress toward civilization
were the developments of agriculture and religion. I doubt that the modern creative insight could
have taken place prior to their development. Therefore, their development may tell us something
about the nature of the modern creative insight.
It has often been said that necessity is the mother of invention. If this is the case, then
why do creative insights take place in the bed, the bath, or the bus rather than at the time and place of
necessity? The history of science suggests that creativity is more closely associated with mental
free play than with necessity. It is the absence of necessity rather than the presence of it that stirs
creativity.
The same can be said of the origin of agriculture. In nature, hungry creatures always eat
what is readily available to them. They do not prolong their hunger so that they can play with
their food. Bearing this in mind, it would be reasonable to hypothesize that agriculture did not
develop in response to any immediate need for food. Rather, agriculture developed as a hobby for
the already well fed. Only those who had already consumed their fill would perceive food as
being a toy for play. In an anthropological sense, was there some Garden of Eden? Most likely
there was.
Scientific creativity, however, involves more than merely playing with ones food. It
involves manipulating and reinterpreting symbols. Symbols are patterns that exist within our
mind, but that are always at least one step removed from their archetypal analogues in the physical
world. Consider the case of the rabbit that was being chased by the coyote. The coyote present in
the rabbits mind was definitely not symbolic. In the case of a human culture that perceived in the
coyote some divinity or cosmic significance, the coyote had become a symbol.
What distinguished the rabbit from the human culture is that the humans were not being
pursued by the coyote. It was the lack of necessity rather than necessity that allowed humans to
transform the coyote into a symbol. Symbolization only takes place when the animal being made
into a symbol poses no proximate threat.
The earliest gods were artistic representations of animals and people. The animals
chosen were the foods that man ate, the natural prey and the domesticated herd animals. The
persons were usually their ancestors or, as in Egypt, their present or former kings.
I suggest that artistic representation preceded the elevation of animals and people to
symbolic and ultimately divine status. Like agricultural production, artistic representation was
initially a product of play. This play produced objects that vaguely resembled real animals, but
that were clearly distinguishable and well removed from them. These artistic representations
could then become symbols.
Imagine what happened as prehistoric man was going to sleep. Somewhere between
wakefulness and sleep, his drawings and his clay figures took life and leaped. His clay deer
leaped through his dreams, just as the live deer leaped through his life during his waking hours.
This was the genesis of the gods. His artistic representations had come to life.
Man now lived in two worlds. One was the real world of real deer and real physical
objects. The second was the world of his imagination, the realm of symbols. This second world
offered many possibilities that did not exist in the real world.
Symbols could be mentally manipulated in the mind. The greater the degree to which
man was aware of the unreality of his symbols, the greater became his ability to manipulate them
at will. Doesn't this describe the nature and functioning of contemporary art? Physical theory
established its dominion within this world of unreality.
In earlier times, there was something that stood in the way of this ability to manipulate
freely. That was mans inability to distinguish between symbolism and reality. Having come to
life, the gods acquired great power over the minds of man. They became reality.
In more recent times, science's faith in tenets that had emerged from scriptural creation
myth and the medieval world view has stood in the way of scientific creativity. In earlier times,
these tenets served science well. Science as we know it today would never have emerged had
these tenets not been used to support its emergence. However, what we possess today is a more
mature pursuit that no longer needs these earlier tenets to support its legitimacy.
What we need today is greater flexibility in our abilities to manipulate the concepts and
symbols that inhabit our mind. Under the world view of medieval Christianity, pride and hubris
were the two most serious sins. It was hubris that had resulted in the fall of Satan from divine
grace. Under the world view of evolutionism, the ultimate sin is inflexibility.

L. HISTORIOGRAPHY OF THE SCIENTIFIC REVOLUTION

Was the Scientific Revolution a one-time event on which we now build? Or is science a
process of intellectual challenge that requires each generation to reinvent science for itself, and to
reconsider all earlier doctrines? Must each generation reinvent the wheel for itself, or at least
reconsider its understanding of the wheel?
During any period in scientific history, each discipline is dominated by concepts that
provide the discipline with its central vision, and that govern its pursuits. Historian of science
Thomas S. Kuhn called these central concepts "paradigms." They are often called "models."
As Kuhn argued, the vast majority of those active in any scientific discipline in any
generation must be committed to working within the paradigms that dominate their times. In
discussing normative science, he noted the following:

The areas investigated by normal science are, of course, minuscule; the enterprise now
under discussion has drastically restricted vision. But those restrictions, born of
confidence in a paradigm, turn out to be essential for the development of science. By
focusing attention upon a small range of relatively esoteric problems, the paradigm forces
scientists to investigate some part of nature in a detail and depth that would otherwise be
unimaginable. ... during the period when the paradigm is successful, the profession will
have solved problems that its members could scarcely have imagined and would never
have undertaken without commitment to the paradigm. At least part of that achievement
always proves to be permanent... [21]


Simultaneously, however, there must be a second group of persons actively pursuing
scientific thought. These individuals take it upon themselves to challenge the paradigms
employed by the normative scientists, while at the same time acknowledging their dependence
upon the work of the normative scientists. Normal science provides these individuals with food to
digest, but also with chronic cases of intellectual indigestion.
This second and much smaller group consists of those who fancy themselves to be
creative theorists. These are the individuals of every generation who must reinvent scientific
disciplines for themselves. Their goal is to seek out particular difficulties or omissions that exist
within the framework of the accepted paradigm, and to use these difficulties or omissions as their
starting points from which to seek new paradigms.
There are accepted credentials for those who seek to become a part of normal science.
There is regular employment within their disciplines. Neither credentials nor employment exist
for those who seek to follow the second path. Of the many thousands who try, perhaps one will
succeed. The others will fade unnoticed into history.
These seekers are like the pioneers of the biosphere. The pioneering individuals of the
biological world are those individuals that intrude into new environments, that develop new
patterns of behavior, or that learn to exploit new sources of food. Most of the adaptive evolution
within the biological world takes place because a few such individuals succeed.
Natural selection works in tandem with pioneering behavior. Those who succeed as
pioneers are frequently favored by natural selection because their new environments or behaviors
offer greater possibilities for reproductive success. This means that natural selection is not based
entirely upon the physical characteristics of those favored by natural selection, but also upon their
chosen behaviors.
Sometimes the new behavior is forced upon the pioneering individual. In other instances,
the new behavior is merely opportunistic. The existence of opportunistic pioneering in the
biological world suggests that theorists prior to Charles Darwin, his grandfather Erasmus Darwin
included, had not been entirely wrong. They had theorized that species evolved because they had
willed themselves to evolve. Opportunistic pioneering does involve some degree of will.
For both the normative scientists and the creative pioneers, the starting point is the
science textbook. However, they then use this textbook in two very different ways. As Kuhn
noted about normal science:

When the individual scientist can take a paradigm for granted, he need no longer, in his
major works, attempt to build his field anew, starting from the first principles and
justifying the use of each concept introduced. That can be left to the writer of textbooks.
Given a textbook, however, the creative scientist can begin his research where it leaves
off and thus concentrate exclusively upon the subtlest and most esoteric aspects of the
natural phenomena that concern his group.
22


Normative scientists and creative pioneers view the textbook in two very different ways.
For the normative scientist, the doctrines presented in the textbook become the basis of his
creative endeavors. He functions within their constraints. For the creative pioneer, however, the
textbook contains claims that need to be challenged rather than accepted. One challenges the
authority of the textbook rather than building upon its explanations.
This is why few who become creative pioneers will complete academic programs in their
adopted scientific disciplines. They do not fully accept the discipline imposed by those paradigms
that the discipline has adopted as its own. Consequently, the programs that creative pioneers
pursue will be the ones that they have laid out for themselves. Nearly all such programs will
prove fruitless. Very rarely, however, there is an exception to this fruitlessness.
The creative pioneer seeks to engage in precisely that kind of activity that scientists
engaged in normal science nearly always avoid. As Kuhn noted:

The scientific enterprise as a whole does from time to time prove useful, open up new

territory, display order, and test long-accepted belief. Nevertheless, the individual
engaged on a normal research problem is almost never doing any one of these things.
Once engaged, his motivation is of a rather different sort. What then challenges him is
the conviction that, if only he is skilful enough, he will succeed in solving a puzzle that
no one before has solved or solved so well.
23


In such a context, a paradigm that has been accepted as a natural law is something that
the normative scientist simply accepts. It has found its way into the textbook for good reason.
Although the textbook may chronicle paradigmatic shifts that have taken place in the past, the
textbook itself does not encourage speculation that some further shift will undermine the concepts
that it presents.
Scientific textbooks employ a peculiar sense of authority. On the one hand, the scientific
disciplines validate their authority by pointing out the various experiments by which their
paradigms have been tested. The authority of the modern sciences is therefore greater than that of
the earlier traditional authorities such as Aristotle.
On the other hand, however, the scientific method contains implicit within it the method
by which the authority of accepted paradigms can be challenged. Few if any earlier authorities
incorporated such mechanisms in their doctrines. Scientific textbooks point out that this
mechanism exists, but downplay the possibility that this mechanism will be employed to
successfully overturn the paradigms that they teach.
As previously noted, natural law as we understand it today is not what natural law was
three centuries ago. We continue to use the term "natural law," even though what we mean by it
has changed. What we call "laws" today are not laws at all in the classical sense of divine, natural
law as understood in the seventeenth century.
Our use of an older term whose meaning is now archaic tells us something about our
attitude toward disciplinary continuity within the physical sciences. We like to believe that more
continuity has existed than what has actually been the case. Kuhn clearly recognized this fact. In
his writings, he acknowledged that there never was a single Scientific Revolution. Rather, there
have been a series of revolutionary paradigm shifts within the various disciplines.
Furthermore, a paradigm within one discipline may mean something quite different than
a paradigm by the same name in a different discipline. Although the disciplines may share what
appears to be a common paradigm, they may use that paradigm in very different ways. Their uses
of the paradigm may diverge in different directions, implying conclusions that ultimately
contradict each other.
Nonetheless, we like to think of paradigms as being like building blocks. Science as a
single endeavor will build an edifice, block by intellectual block, the previously placed blocks
providing a secure structure on which to build. Our structure will rise toward the heavens, a
modern analogue to the medieval cathedral, a monument both to God's creation and to man's
understanding of it.
Faith that such a cathedral can be built is characteristic of normal science, at least in the
popular mind. This is the faith that allows and encourages scientists to function within the
constraints of accepted paradigms, hoping to add their mortar to the structure. In fact, as Kuhn
was well aware, science as a doctrinal structure is an edifice that we keep tearing apart as we
attempt to build it. Today, we have little idea what the finished edifice would look like,
assuming that human intellect might ever be able to complete it.


M. THE FUNCTION OF HISTORIOGRAPHY

Like the writing of creation myths, the writing of historiographies was a longstanding
cultural phenomenon that the scientific movement expropriated for its own purposes. Much of the
Old Testament consists of historiographic writings compiled from a number of sources whose
institutional claims frequently contradicted each other.
Historiographic writings were hardly objective histories. Rather, they were histories
written for particular purposes to support the societal roles and prerogatives of the various
institutions within the culture. In Old Testament histories, one historian might write in support of
the Davidic monarchy (Kings), another the priesthood (Chronicles), still another the tradition of
prophecy (parts of various books). One might write in favor of an exclusive, insular Jewish
culture (Ezra), another in support of its international inclusiveness (Ruth). What one learns from
history depends upon whose history one reads, and on what the particular author intended to teach.
Scientific textbooks have clearly employed their own historiography, one obviously quite
favorable toward the scientific movement. Textbooks paint a picture of continued progress toward
the construction of the edifice of doctrine, the true cathedral of the modern age. Seldom
mentioned are the many digressions into fruitless speculations, and the many times that earlier
interpretations of experimental results were later thrown out as invalid.
Prior to Kuhn, scientific historiography generally followed much the same pattern as had
the textbooks. It was only reasonable that those who had learned from the textbooks would write
history in much the same format as had the textbooks from which they had learned. After all,
textbooks were themselves structured as historical accounts.
At first glance, it may not appear that scientific textbooks should properly be called
histories. This is because these textbooks were the first historical accounts that invited their
readers to employ the experimental method. Unlike earlier historical accounts, scientific
textbooks allow their students to demonstrate for themselves the validity of the historical accounts
they present. Students move from the classroom to the instructional laboratory to verify
paradigmatic truths for themselves, and to build their own faiths in the experimental method.
As usually employed in scientific education, the experimental method as an instructional
device seriously shortchanges the real importance of history. This is because the laboratory is
demonstrative, whereas the most important events in scientific history have been interpretive
rather than demonstrative.
Textbooks teach the paradigms of the various disciplines. These paradigms themselves
are interpretive, not demonstrative. There is much about the paradigms that the textbooks fail to
teach. What were the many cultural and linguistic factors that encouraged those interpretations?
The paradigm of energy was based upon interpretation, not experimentation, as were nearly all of
the other major paradigms. Textbooks have generally substituted laboratory demonstration for
thorough discussions of such historical background.
When history is discussed, these discussions mention little of the broader historical
context of the times. There is little mention of the prevailing technologies or of the industrial
economics of the time. Rarely is there any mention of the influence of contemporary religious
doctrine on particular theorists. Isaac Newton, for example, had been trained as a cleric rather
than a scientist. He exhaustively read the church fathers, and attempted to mathematically
decipher the book of Revelation. His religious beliefs seriously limited his ability to consider
certain theoretical possibilities.

Like most educational projects, the function of scientific education has been as much to
persuade as to inform. Of course, this isn't how the educational mandate has been expressed. The
mandate assumes that those who are informed will obviously be persuaded. To be less than
persuaded is to be less than educated.
Although the religious implications of Darwinian evolution had been debated as early as
a century earlier, it wasn't until the publication of Thomas S. Kuhn's historical work, The Structure of
Scientific Revolutions, first published in 1962, that the obvious implications of Darwinian
evolution began being felt in scientific historiography.
Soon after its inception, Darwinian evolution was perceived as posing a direct challenge
to traditional church doctrine regarding creation. Darwinian evolution was perceived as
challenging traditional religious doctrine, but not the scientific epistemology that had been based
upon such doctrine. A century after Darwin, it wasn't Darwinian evolution that provoked the
change in scientific historiography, but rather the careful study of scientific history itself.
Kuhn was no evolutionary ideologue. Rather, he was a conscientious historian of science
whose goal was to make the writing of scientific history correspond with the realities of scientific
discovery. Kuhn theorized a series of smaller, disruptive paradigm shifts within the various
disciplines, rather than a single revolution that steadily progressed by building upon itself.
Although as a historian rather than an ideologue or philosopher he made no effort to
emphasize the fact, the emerging understanding of scientific doctrine and the processes of
discovery closely approximated the understanding that one could expect to emerge from the theory
of biological evolution.
The importance of this new understanding is that it places much greater emphasis upon
the importance of cognitive, interpretive processes. The biblical creation myths that had
previously guided the physical sciences had encouraged placing a much greater emphasis upon
demonstration, assuming that valid interpretation would not be overly difficult. By contrast,
Darwinian evolution has left us with little upon which to base confidence in our interpretive
capabilities.
Darwinian evolution was not merely a new paradigm that found its way into biological
textbooks. It was a new creation myth that should have forced us to reconsider the nature of all
textbooks in the physical sciences.

N. THE CHALLENGE

The most important question involving the steam engine concerns the principle of
qualitative transformation. How are photons transformed into mechanical power, and mechanical
power back into photons? This is the question that the kinetic theory of heat attempted to avoid,
and that thermodynamics has not answered.
In the following chapter, I argue that the answer lies in reinterpreting Einsteinian
relativity. I did not begin my pursuit with the intent that I should challenge Einstein. Instead, my
intuitive insights placed me on a collision course with Einstein. If one of us was to be correct,
then the other must be wrong. I was not the first theoretical thinker to have encountered such a
situation.
When considering the transition of energy from one inertial perspective to another, the
basic question underlying relativity, Einstein never asked the question concerning qualitative
transformation. I combine the question of qualitative transformation with that of the transition of

photons from one inertial perspective to another.
The theoretical structure that emerges is startling, although not as novel as it may seem at
first. Consistent with my observations regarding the creative process, all components of this new
theoretical structure are old and familiar. I have merely rearranged them.
We are most comfortable with concepts that have long been familiar to us. Familiarity
naturally breeds acceptance and belief. The theoretical structure I am proposing is unfamiliar and
is therefore certain to provoke strong doubt. When confronted with the new and unfamiliar, the
usual instinct is to circle the wagons around the existing theoretical structure. Circling the wagons
assumes that the existing structure must have survived as long as it has for some good reason.
It is psychologically much more difficult to ask a simple question: Does either the
presently accepted theory or the newly proposed theory have greater credibility than the
other? Or are they, in a conceptual sense, of equal credibility? Where two competing sets of
conceptual statements exist, is it reasonable to pursue one, while completely ignoring the other?
Physical theory makes great use of the mathematical structure and discipline of geometry.
Geometry is based on a handful of fundamental postulates that cannot themselves be proven, but
that appear so self-evident as to be undeniable. Physical theory has a similar set of statements, its
basic assumptions. Like the fundamental postulates of geometry, the basic assumptions of the
physical sciences appear to be self-evident but cannot be proven. However, these basic
assumptions may not be as self-evident as they appear to be. In fact, they may be wrong.
I am proposing two statements as a new foundation for physical theory. Like the field
theory and relativity of today, the purpose of these statements is to provide a foundation from
which basic physical relationships can be predicted and their formulas derived. Like the basic
postulates of geometry, these new statements themselves can't be proven, at least not directly.
Unlike the basic postulates of geometry, there is nothing intrinsically familiar and believable about
these new statements. At first, they might seem quite strange. To the extent that proof is possible,
the proof will be in the pudding. What will these statements allow us to predict about fundamental
physical relationships?
The task of the creative theorist has much in common with that of the written prophets of
the Old Testament. In both cases, the individual is working within the context of older concepts
that he is attempting to reinterpret or reprioritize to fit the contemporary situation. The themes
with which he works are the accepted themes of the culture.

CHAPTER THREE:
DOES FIELD ASSIGN QUALITIES TO
OTHERWISE EMPTY SPACE?

Ever since Newton's discovery of the inverse squared law of gravity in the seventeenth
century, we have defined field as assigning qualities to points in space. These points were not
believed to be truly empty until the twentieth century. Earlier, it was believed that these points
were occupied by aether, a kind of all-pervasive matter that we could not sense.
In these earlier times, it was believed that no truly empty space could exist. Medieval
philosophy abhorred the very possibility of a spatial void, and denied that any true void could
possibly exist. All apparently empty spaces must be filled with some kind of matter. In the
absence of any matter that we could sense, the space must be filled with an aether, a kind of matter
that we could not sense. Medieval philosophy dictated that a particular kind of physical reality
must exist, regardless of whether or not we could sense it. Finding it useful to do so, physical
theory later expropriated this philosophical statement for its own purposes.
We like to think that science procures its models by looking at the physical universe, that
which can be detected and recorded by experimental apparatus. This is not true. Scientists, like
all other citizens of their cultures, are ultimately products of their technological and cultural
environments. Anything within that environment, technological or cultural, can be expropriated
for use as a scientific model. That which is expropriated can come from theology, literature, or
even a kitchen drawer. Medieval aether was one such expropriation.
In contrast to the earth, the cesspool of the universe, the medieval aether was the pure air
that God breathed, the air in which angels flapped their wings. One might ask the obvious
question: Why was it necessary for God to breathe an air that was qualitatively different than the
air we mere humans breathed? True, the air that medieval humans breathed was often tainted with
the smells of agricultural realities, or with the plentifully rich odors of urban life. However,
neither air pollution nor foul odor was the reason.
The real reason was that the earthly air within which humans lived contained components
of cosmological contamination, and was therefore unsuitable for God. This belief was rooted in
the pastiche of medieval creation myth. The fall of man was a central component of this myth.
The fall of man had contaminated not only us humans as a species, but also the earthly realm in
which we lived. As a result of the fall of man, there now existed two separate realms. One was
the pure realm within which God and the heavenly hosts lived. The other was the contaminated,
earthly realm within which we humans lived. Because we lived in that contaminated realm, the air
that we breathed contained the cosmological contamination that humans, because of the fall of man,
had conferred upon it. (Talk about one apple spoiling the barrel!) The air that God breathed did
not contain such contamination. It was pure. Quoting from an encyclopaedist of the fourteenth
century:

This air shineth night and day of resplendour perpetual and is so clear and shining that if
a man were abiding in that part [heaven] he should see all, one thing and another and all
that is, fro from one end to the other.
24



During late medieval times, no philosopher or poet believed that we mere mortals would
be blessed to breathe the air that was reserved for God and his heavenly host. The heavenly air,
aether, and the earthly air that we breathed existed in two separate realms. Physical theorists,
however, began to believe otherwise. Why couldn't this heavenly air, previously reserved for God
and the heavenly hosts, also be a part of our immediate world, the world in which we live?
Belief in a pure, heavenly air merged with the medieval belief that no spatial void could possibly
exist. The result was aether, an insensible (outside the range of human senses) kind of matter that
existed as part of our world, and that filled any space that we might perceive to be empty.
If all spaces must be filled with some form of matter, even though we may not be able to
sense its presence, then this insensible, aethereal matter could reasonably be assigned particular
functions by physical theorists. An earlier subject for philosophical speculation and for theology,
aether later became the stuff of physical theory. Absent this much earlier philosophical and
theological speculation, it is doubtful that later physical theorists would have hypothesized the
existence of some form of insensible matter. This much earlier pattern of thought, the belief that
an insensible matter could exist, sanctioned what emerged as the only viable pre-Einsteinian
explanation of light and electromagnetic fields.
Aether was injected into theory so that phenomena could be explained using mechanical
principles. It was apparent that we could not explain these phenomena in a mechanical way using
the matter that we can sense. Waves spreading through air had already been explained as sound.
Mechanical explanation of additional phenomena, such as light, was possible only if we
hypothesized a second kind of matter that coexisted with the matter that we could sense, a second
kind of matter that we could not sense. Medieval philosophy and theology came to our rescue.
Aether had existed centuries before it was incorporated into physical theory.
No human had ever breathed aethereal air, the air of God. As aether became the stuff of
physical theory, it became a substance that was not confined to the heavenly realm. Instead, it
became a kind of matter that existed everywhere, interpenetrating all other matter. There could be
no space that was devoid of aether. The air of God became the cosmological rubber band of
physics.
Aether provided James Clerk Maxwell with the conceptual basis from which he
developed his mathematical model of field, the electromagnetic field in particular. As Maxwell
described the relationship of aether to electromagnetic fields in the late nineteenth century:

It appears therefore that certain phenomena in electricity and magnetism lead to the
same conclusion as those of optics, namely, that there is an aethereal medium pervading
all bodies, and modified only in degree by their presence; that the parts of this medium
are capable of being set in motion by electric currents and magnets; that this motion is
communicated from one part of the medium to another by forces arising from the
connections of those parts; that under the action of these forces there is a certain yielding
depending on the elasticity of these connections; and that therefore energy in two
different forms may exist in the medium, the one form being the actual energy of motion
of its parts, and the other being the potential energy stored in the connections, by virtue of
their elasticity.
25


In Maxwells theory, aether was like an extremely high tension, three dimensional rubber
strip. The rubber strip was thin and narrow toward the center of the field, and widened and
thickened as one moved away from the center. As with any normal rubber band, the tension

within the material was greatest at its narrowest point. In the case of the electromagnetic field,
Maxwell hypothesized that the strongest areas of electromagnetic fields, those nearest the center,
experienced the highest degree of tension within the aether.
Ever break a rubber band and notice where it broke? It broke at the point of highest
tension. This was the point at which the rubber band had been damaged. Damage had reduced the
cross sectional area of the rubber band at this point. As the rubber band was stretched, the rubber
at this point became more highly stressed than at other points. The stress per unit of cross
sectional area is greatest where the cross sectional area is the smallest. This is where the rubber
band breaks.
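To see the arithmetic behind this, consider a minimal sketch in Python. The numbers are entirely hypothetical; the only point is that stress, force divided by cross-sectional area, peaks where the band is narrowest:

    # A minimal sketch of the rubber band reasoning above. All numbers are
    # hypothetical; stress = force / area peaks where the area is smallest.

    def stress_profile(force_newtons, areas_mm2):
        """Return the stress (N/mm^2) at each point along the band."""
        return [force_newtons / area for area in areas_mm2]

    areas = [4.0, 3.8, 1.0, 3.9, 4.0]   # the 1.0 mm^2 point is "damaged"
    stresses = stress_profile(10.0, areas)
    print("Band breaks at point", stresses.index(max(stresses)))   # -> point 2
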
This was the kind of thinking that Maxwell employed in developing his aethereal theory
of the electromagnetic field. He had no direct experience with the aether itself. This he clearly
acknowledged. He could only infer the existence of aether itself from the experiencing of light
and electromagnetic fields.
It wasn't the existence of electromagnetic fields that necessarily dictated the existence of
aether. Rather, it was the nature of Maxwells explanation that dictated that an aethereal medium
must exist. Maxwells explanation was based upon mechanical thought. Within mechanical
thought, mechanical wave motion can only exist if a material medium is present. It is the material
medium that transmits the mechanical wave. Maxwell hypothesized that light was transmitted as a
mechanical wave through a material medium, the aether.
If the material medium must be present so as to transmit a mechanical wave, then internal
qualities could also be assigned to the medium. In practical technology, this is an everyday
practice. This is what materials technology is all about. Maxwell hypothesized that the internal
qualities of the aether were electromagnetic fields.
As has happened so often in physical theory, Maxwell used analogy as his basis for
thought. His direct experience with the principle that he then employed was probably limited, as
ours is, to the stretching and breaking of a rubber band. Maxwell's actual experimental apparatus
was most likely found in an office or kitchen drawer.
Unlike our ordinary rubber band, however, Maxwell's theoretical rubber band was
cosmological. One always lived within this cosmological rubber band. It was omnipresent. One
could not escape it. It existed everywhere, occupying all space.
One should remember that Maxwell was a mathematician, not an experimental scientist.
Mathematics prides itself on its ability to formulate and quantify just about anything. The
existence of mathematical formulation is not predicated upon the existence of some physical
analog. Historically speaking, however, mathematical formulation has generally led to the belief
that there must be some physical phenomenon to which the formula applies.
We have not departed from the late medieval doctrine of plenitude to as great a degree as
we like to think. Plenitude led to belief that one could accurately hypothesize and describe much
that one has never seen or experienced. Plenitude claimed that God, lacking envy, had created
everything that it was possible for him to have created. If we can imagine it, then God probably
created it. This pattern of thought carries forth easily into physical theory. Once a formula has
been developed, we seek uses for it. We have found physical reality using little more than pencil
and paper, or so we like to think. Later in this book, I am guilty of this practice.
The degree of tension at any given point gave the aether its quality of field, the
electromagnetic field in particular. The greater the tension, the stronger the field at that point.
The elasticity of the medium's internal connections gave it the ability to transmit wave motion,
just as one can strum a stretched rubber band and cause it to resonate. Strum a rubber band at one

point. That action will be transmitted as a wave from end-to-end along the stretched band. Thus,
the qualities of the electromagnetic field and the transmission of light could be united into a single
theory. Today we recognize this theoretical unity to have been highly speculative. As this chapter
later explains, the theory that replaced the aethereal hypothesis and that is accepted today is
equally speculative if not even more speculative.
Light was like sound. The two differed in that sound was propagated through ordinary
air, whereas light was propagated through the interpenetrating aether. The only real difference
was that light traveled 60,000 times faster than sound, a minor difference indeed.
As Albert Abraham Michelson noted in 1907, this aethereal medium could not be like
any ordinary form of matter. Given light's extraordinary velocity, aether would have to be either
extremely elastic or extremely rare in the sense that it possessed an extremely light mass, or both.
Putting a pencil to the problem, he calculated that aether would either have to be 3.6 billion times
more elastic than steel, or 3.6 billion times less massive. Such a mass per unit of volume would
correspond to a density 56,000 times less than that of gaseous hydrogen at atmospheric pressure.
26
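
Michelson's arithmetic can be reconstructed from the standard relation for the speed of a mechanical wave in an elastic medium, v equal to the square root of E divided by rho. A brief sketch, using representative modern values for steel rather than Michelson's own inputs:

    import math

    c = 2.998e8           # speed of light, m/s
    E_steel = 2.0e11      # Young's modulus of steel, Pa (representative)
    rho_steel = 7.85e3    # density of steel, kg/m^3 (representative)

    # Mechanical wave speed: v = sqrt(E / rho). For an aether to carry
    # light, v must equal c, so either E must grow or rho must shrink
    # by the factor (c / v_steel)^2.
    v_steel = math.sqrt(E_steel / rho_steel)    # roughly 5,000 m/s
    factor = (c / v_steel) ** 2
    print(f"required factor: {factor:.2e}")     # on the order of 3.6 billion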

With regard to later efforts to unify gravitation and electromagnetism into a single theory,
Maxwell's hypothesis posed one obvious problem: The aethereal medium could only be used to
explain one or the other. It was possible to unify electromagnetic fields and radiation (light) into a
single theory, but it was not possible to also include gravitation. This was similar to the problem
that had already been encountered with ordinary air. Mechanical wave motion through ordinary
air could not be used to explain both sound and light.
Einstein's work was the first to systematically discard faith in the existence of an
insensible aether, a hypothetical medium whose existence we cannot sense. Einstein proposed the
premise that field could be experienced in a void, a space entirely empty of matter. Although
discarding faith in the actual existence of aether, he retained faith in the most fundamental quality
that had previously been attributed to aether: Field assigned qualities to points in some
geometrically defined space. Bodies occupying those points experienced the qualities of those
points.
Indeed, this is why field has acquired and retained its name, "field." Field in physics is
like a field in baseball or farming. It is a multidimensional context within which particular events
take place. In physics, the qualities of the given field at any given point within it will influence
events that take place at that point. We can define point (x,y,z) just as readily as we can define the
location of third base on a baseball field, or a stalk of corn in some agricultural field. Like a field
in baseball or farming, field in physics exists at any given point quite independently of any body
or being that might experience it. In an agricultural field, a small area of dirt will exist regardless
of whether a stalk of corn grows there or not.
Bodies are assumed to experience fields because they experience the qualities of the
points that they occupy. These qualities are present at any given point, regardless of whether any
body occupies that point. Even if the point is entirely vacated, the qualities of that point remain.
Note the correlation between the mathematics of the three dimensional coordinate system
and the hypothesized structure of field. Any point within a three dimensional coordinate system
can be defined with reference to the three axes of the system, x, y, and z. According to field
theory, each such defined point within the coordinate system can be assigned particular field
values. These values are a combination of several factors. First, is the field gravitational or
electromagnetic or both? Gravitational fields can exist without accompanying electromagnetic
fields, but electromagnetic fields cannot exist without accompanying gravitational fields. If both,
then separate values must exist for each.

Second, what is the respective strength of each? There is no necessary ratio that defines
the relative strengths of the two kinds of field. One point can be strongly gravitational, another
strongly electromagnetic. The third question applies only to electromagnetic fields. Gravitational
fields are always attractive. Electromagnetic fields are not. When dealing with electromagnetism,
opposite charges attract, like charges repel. What is the charge of the particular field, positive or
negative?
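The structure being described here can be made concrete. What follows is a minimal sketch, with hypothetical names and values, of how the contemporary picture assigns separate gravitational and electromagnetic values to each coordinate point:

    from dataclasses import dataclass

    @dataclass
    class FieldValues:
        gravitational: float       # always attractive, a magnitude suffices
        electromagnetic: float     # 0.0 where no electromagnetic field exists
        em_charge_sign: int = 0    # +1 or -1 whenever electromagnetic is nonzero

    # Each point (x, y, z) carries its own values; no fixed ratio links
    # the two field strengths from one point to the next.
    field_at = {
        (0.0, 0.0, 1.0): FieldValues(gravitational=9.8, electromagnetic=0.0),
        (2.0, 1.0, 0.5): FieldValues(gravitational=0.3, electromagnetic=12.0,
                                     em_charge_sign=-1),
    }
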
Field theory placed great confidence in the use of the three dimensional coordinate
system as a mathematical model, assuming that it had a correlating analog in the physical world.
Such faith has been characteristic of physical theory. We assume that physical reality directly
parallels the structure of our mathematics.
Einstein fully embraced the spatial concept of field that had been employed earlier by
advocates of aether. Following in their footsteps, he insisted that field must assign qualities to
empty points in space. This made it possible for Einstein to carry forward theoretical equations
that had been based upon the existence of aether. These equations had been based upon a spatial
concept of field. Einstein reinterpreted these equations. He did not discard them.
Einsteinian thought was based upon continuity with a previous pattern of thought,
accompanied by a partial rejection of that pattern of thought. Continuity involved his continued
acceptance of the practice of assigning field values to various points within a three dimensional
coordinate system, regardless of whether those values were actually being experienced at any
given time. His partial rejection involved his rejection of the aethereal hypothesis. It was no
longer necessary to argue that each such point must be occupied by some form of matter. Each
such point could be devoid of matter, but nonetheless possess vectorial (numerical [scalar] and
directional) values that we experience as field.
Despite having discarded faith in aether, Einstein nonetheless retained Maxwell's field
equations that had been based upon the elasticity of the aether. Yes, one could still find rubber
bands in an office drawer. Instead of assigning the qualities of field to particular points in the
aethereal medium, as Maxwell had done, Einstein assigned these qualities to otherwise empty
points in space. Since Einstein, we have considered it self-evident that space must be qualitatively
relevant because the concept of field must assign particular qualities to particular points in space.
I argue that this statement poses serious epistemological difficulties and should therefore
be considered for discard. This will obviously involve radically redefining our concept of field.
Field can no longer be field as we understand it today. Field must be an experience based upon an
interactive relationship, not some phenomenon that exists independently of some interactive
relationship. At any given point, field can be said to exist only when and because we experience it
as existing. We cannot say that field exists at that point when we are not experiencing it.

A. THE NEED TO CONCEPTUALLY UNIFY THEORY

The inverse squared law has long been attributed to field, both gravitational and
electromagnetic, as its most fundamental and universal quality. Prior to the discovery of
electromagnetic fields, it was known simply as the law of gravity, which Newton first derived.
After electromagnetic fields were discovered, the same law was found to apply to both.
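The common mathematical form is easy to exhibit with the standard constants. In the sketch below, doubling the separation quarters either force; this shared inverse squared behavior is the quality in question:

    G = 6.674e-11   # gravitational constant, N m^2 / kg^2
    K = 8.988e9     # Coulomb constant, N m^2 / C^2

    def gravity(m1, m2, r):
        """Newton: F = G * m1 * m2 / r^2 (always attractive)."""
        return G * m1 * m2 / r**2

    def coulomb(q1, q2, r):
        """Coulomb: F = K * q1 * q2 / r^2 (sign of the product gives
        attraction or repulsion)."""
        return K * q1 * q2 / r**2

    # Doubling r quarters either force -- the shared inverse squared form.
    assert gravity(1.0, 1.0, 2.0) == gravity(1.0, 1.0, 1.0) / 4
    assert coulomb(1.0, 1.0, 2.0) == coulomb(1.0, 1.0, 1.0) / 4
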
Beginning with Einstein, and to some degree with Faraday before him, theorists began
trying to unify gravitational and electromagnetic fields into a single theory. Einstein called it
"unified field theory." Einstein is generally credited with having failed in his quest. Since then,

two additional kinds of field have been hypothesized, the weak and strong nuclear, one of
gravitational nature and the other of electromagnetic nature. These additional fields were
hypothesized to exist because both the masses and charges of nuclei exceed the sums of the
masses and charges of the particles that comprise the nuclei. If two plus two equals 4.1, then how
does one explain the additional 0.1? Today, the quest for field unification involving all four fields
is known as the quest for grand unification.
If gravitational and electromagnetic fields are to be unified into a single theory, then they
should share a common derivation for the inverse squared law. This is, after all, the most
fundamental mathematically definable quality that they share. Obviously, the most fundamental
quality is their ability to engage in attractive relationships. Thus far, it has not been possible to
find a common derivation.
Einstein used general relativity to derive the law of gravity. According to general
relativity, the gravitational field present at each point within the field alters the values of the units of
time and distance at that point. As one moves from one point to another, experiencing a change in
the strength of the gravitational field as a result of the move, one also experiences changes in the
values of the units of time and distance. This alteration in the values of the units of time and
distance explains the change in the strength of the gravitational field.
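The textbook quantitative expression of this idea is gravitational time dilation. The sketch below uses the conventional Schwarzschild factor, the square root of (1 - 2GM/(rc^2)), with values for the earth; this is the accepted formula, not a proposal of mine:

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_earth = 5.972e24   # mass of the earth, kg

    def clock_rate(r):
        """Conventional Schwarzschild factor: the rate of a clock at radius
        r relative to a distant clock, sqrt(1 - 2GM / (r c^2))."""
        return math.sqrt(1.0 - 2.0 * G * M_earth / (r * c**2))

    # Moving outward weakens the field, and the rate creeps back toward 1:
    print(f"surface (6,371 km): {clock_rate(6.371e6):.12f}")
    print(f"higher (20,000 km): {clock_rate(2.0e7):.12f}")
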
Maxwell used the aethereal hypothesis both to derive the law of electromagnetic field
strength and to explain electromagnetic radiation. Einstein used general relativity to derive the
law of gravitational field strength. Neither could apply their derivations to both gravitation and
electromagnetism.
The problem with the existence of two kinds of field is that both coexist in the same
spaces, but at differing ratios from point to point. This was part of the problem that had earlier
sanctioned faith in the existence of an aethereal medium, a medium that could be used to explain
electromagnetic fields. Tensions within the aether could be used to explain electromagnetic
phenomena that clearly violated the law of gravity even though the same inverse squared law
applied to both gravity and electromagnetism. As an example, consider an electromagnet that lifts
a piece of iron from a tabletop. Electromagnetism is being used to defy gravity. In this situation,
electromagnetism is experienced as being stronger than gravity, and is acting in an opposing
direction. However, the inverse squared law nonetheless applies to both the earth's gravity and
the electromagnet's magnetic attraction.
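The bookkeeping in this example is simple: the iron leaves the tabletop when the upward magnetic pull exceeds its weight. A sketch with hypothetical values:

    g = 9.81   # gravitational acceleration at the earth's surface, m/s^2

    def lifts(mass_kg, magnetic_pull_newtons):
        """True when the upward electromagnetic force exceeds the downward
        gravitational force (the weight of the iron)."""
        return magnetic_pull_newtons > mass_kg * g

    print(lifts(0.5, 10.0))   # hypothetical 10 N pull vs. 4.9 N weight -> True
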
Later, Einstein conceptually eliminated aether. This elimination did not solve the
problem of differing ratios, one of the two problems (the other being the movement of light) that
had earlier induced scientists to hypothesize the existence of aether.
Because of this problem of differing ratios, one could not simultaneously derive the
inverse squared law for both kinds of field from the same statement concerning the qualities of
points within the fields. One could assume spatial qualities, either aethereal or of general
relativity, and thereby derive one, either gravitational or electromagnetic, but not the other.
In any effort to find a common derivation, the aethereal hypothesis fails because the
aether can only possess one degree of elastic tension at any given point within it. General
relativity fails because the units of time and distance can only be altered once for each point in
space. Because the ratios of gravitational to electromagnetic field strength vary from point to
point, contemporary theory must assume that either one kind of field or the other assigns its values
to the units of time and distance. It is not possible for both to assign values.
This dilemma suggests that it will be possible to conceptually unify gravitation and
electromagnetism only if we discard belief that field exists independently of our experience of it.

Achieving conceptual unity will not be possible for as long as we continue to assume that field
assigns qualities to otherwise empty points in space. We have progressed from believing in some
insensible matter to which we can ascribe particular physical phenomena, to believing that we can
ascribe these phenomena to empty points in space. How much progress have we made?
To paraphrase Shakespeare, the problem that we have encountered in our attempts to
achieve common derivation may lie within ourselves, not somewhere out in the stars. Quoting
historian of science Shmuel Sambursky:

In the past forty years [nearly 75 now] since the last great theoretical breakthrough, an
enormous wealth of new experimental data has accumulated in elementary particle
physics, astrophysics, and cosmology. Theory has been lamentably lagging behind
experiment in recent decades. Many eminent physicists of the late generation, notably
Bohr and Pauli, have expressed the opinion that quantum mechanics may well be the first
of many more steps that will lead physics away from familiar classical concepts. A major
theoretical advance could possibly be achieved only through ideas involving further
radical renunciations of some notions to which physicists have become accustomed in the
age of determinism [emphasis added]
27


Today, particle physics is attempting to move closer and closer to an understanding of the
origin of the universe in the Big Bang. Closer and closer: seconds, milliseconds, microseconds,
nanoseconds. The closer we get in time, the closer we approach that absolute symmetry that
theorists believe existed very early in the Big Bang. The numbers and the energies become so
great as to be unfathomable.
Yet despite our efforts to understand the Big Bang, many of the questions most relevant
to our simplest everyday technologies remain unanswered. We have not been able to find a
common derivation for the inverse squared laws of gravitational and electromagnetic field
strength. We have not found a truly simple explanation of how the steam engine
transforms radiant heat into force.
We have not found a simple principle to explain exactly how and why an electron emits a
photon. We have no simple way to explain why a photon will tend to impact an electron, a tiny
particle in a comparatively vast space. We have no simple explanations for the movement of light
through a refractive medium. We still have difficulty explaining exactly what electricity is.
Following the path taken by Einstein and later advocated by Bohr and Pauli, we have
challenged the constancy of the values of the units of time and distance, and the distinction
between time and space. And yet we have still to find our answers.
I suggest that Sambursky was correct. There is still some portion of an old world view
that we have yet to challenge, something old and allegedly self-evident that stands in our way. I
argue that we have yet to entirely discard the final remnants of the aethereal hypothesis.
The aethereal hypothesis is alive and well, residing just under the surface in Maxwell's
field equations and in Einsteinian relativity. Once we completely rid ourselves of residual aether,
we will be able to restore constancy to the values of the units of time and distance, making
analytical thought easier. We will then be able to solve all of the everyday problems listed above,
and others as well.
It is time that we cleaned our intellectual closet.

B. HISTORICAL BACKGROUND

Newton is credited with having first discovered the inverse squared law of gravitation.
Despite his best efforts, he was never able to explain why gravitation worked. He was unable to
derive it from any theorem that he believed to be more fundamental than gravitation itself.
Consequently, the inverse squared law assumed the status of a basic theorem itself, a basic
theorem of physical reality. Newton himself doubted that this was the case.
As Newton described his doubts in the preface to the first edition of his famed
Philosophiae Naturalis Principia Mathematica (1687):

I derive from the celestial phenomena the forces of gravity with which bodies tend to
the sun and the several planets. Then from these forces, by other propositions which are
also mathematical, I deduce the motions of the planets, the comets, the moon, and the sea.
I wish we could derive the rest of the phenomena of Nature by the same kind of
reasoning from mechanical principles, for I am induced by many reasons to suspect that
they may all depend upon certain forces by which the particles of bodies, by some causes
hitherto unknown, are either mutually impelled towards each other, and cohere in
regular figures, or are repelled and recede from one another. [emphasis added] These
forces being unknown, philosophers have hitherto attempted the search of Nature in vain;
but I hope the principles here laid down will afford some light either to this or some truer
method of philosophy.
28


Never has a more accurately prescient statement been written! I argue that a single
assumption stood between Newton and the achievement of his ultimate goal. Had Newton
reversed this one assumption and then systematically considered the implications of its reversal,
he might well have achieved much of what I hope to achieve in this book.
Newton argued that each point in space is definable in absolute terms and possesses some
absolute location. "Absolute space, in its own nature, without relation to anything external,
remains always similar and immovable."
29
Note the nature of Newton's argument. Newton argued
that God not only created the physical universe, but also some absolute three dimensional
coordinate system. This geometric system existed quite independently of the universe that God
had superimposed upon it. Theology thereby sanctioned unquestioning use of the three
dimensional coordinate system in physical theory.
The basis of Newton's argument was theological, not scientific, but nonetheless became
basic to physical theory for the next two centuries. Newton, having been educated at Trinity
College as a cleric rather than as a scientist, argued that spatial relativism would deprive God of
his role as the creator. This is the same argument employed today by Creationist opponents of
biological evolution.
God the creator created each point, and assigned each point to its location. God was not
merely the creator of matter and its forms, but also of the mathematical context within which
physical reality existed. All points in space occupy the locations that God assigned to them. Because
points in space had absolute, God-given values, then motion could also be defined as being
absolute in its nature. There was something divine about the mathematical structure of the
universe, independent of the matter that God had created and placed within it.

By contrast, Descartes argued that all points are merely relative. Their locations can only
be defined relative to other points, whose locations in turn can only be defined as relative.
Because all points can only be defined as relative, then all motion must also be defined as being
only relative. In this, he anticipated the late nineteenth century work of Ernst Mach, from whom
Einstein drew much of his inspiration for relativity.
Newton's response to Descartes' relativism was hardly scientific in a modern sense.
Instead, it closely resembled the papal rejection of Galileo's advocacy of the Copernican solar
system: Newton accused Descartes of atheism.
30
Newton's advocacy of the absolute nature of
both space and motion then dominated physical theory until the time of Einstein.
Ironically enough, the papal rejection of Copernicus and Galileo decades before Newton
has long been cited as evidence for the existence of a Scientific Revolution. Newton was every bit
as theological as the pope! To the degree that there was a dispute between the pope and the
scientists, this dispute only concerned papal authority. The authority of scripture and theology
was not disputed. The Scientific Revolution argued that certain matters of enquiry should be
ceded by religious authorities to the newly emerging experimental sciences.
If we were to describe Newton in contemporary terms, we would describe him as a
staunch Creationist, definitely not an evolutionist. For Newton, truth began and ended in the
scriptures. We could observe and hypothesize, but the ultimate guide for our observations and
hypotheses was the written revelation of God as contained in the scriptures and as further
elaborated upon within medieval culture. Newton's Creationism dominated physical theory until
the age of Mach and Einstein. Theological Creationism continued to dominate physical theory
until well after Darwin had challenged it in biology.
However, one cannot argue that Descartes' relativism was internally consistent by
today's standards. Descartes, too, had been highly influenced by medieval thought. Hence, he
was also incapable of true relativism. True relativism can exist only where truly empty spaces are
allowed to exist. Consistent with most medieval philosophers, Descartes argued that empty space
could not exist:

As regards a vacuum in the philosophic sense of the word, i.e. a space in which there is
no substance, it is evident that such cannot exist, because the extension of space or
internal place, is not different from that of body... it is absolutely inconceivable that
nothing should possess extension, we ought to conclude also that the same is true of the
space which is supposed to be void, i.e. that since there is in it extension, there is
necessarily also substance.
31


In effect, Descartes argued that a void could not exist because a ruler would need to be
placed within it to measure it. Hence, the space could not be a void. Having argued that all space
must be filled with substance, he argued that the heavens must be filled with a liquid matter, the
substance that was commonly called aether. In this belief, Descartes was following in long
established late medieval tradition. He was merely reiterating that tradition.
Descartes argued that movement within the fluid medium, the aether, caused the motions
of the planets around the sun. Aether, long a subject for speculative philosophy and literature, was
acquiring functional significance in physical theory. Descartes' theory became known as the
theory of aethereal vortices. Immersed in the aether, the planets rotated around the sun much as a
leaf floating on the surface of a stream would follow the swirls and eddies of the current.
Descartes may well have been the first to assign theoretical function to the centuries-old aether,

previously the stuff of philosophy and literature.
The problem with Descartes' relativism is that one cannot employ any form of aethereal
hypothesis without at the same time admitting that both space and motion are less than totally
relative. At any point within the aether, the aether itself provides a consistent reference from
which space can be defined, just as water provides a medium into which one can anchor a boat.
One can suspend a three dimensional coordinate system in the aether, and anchor the coordinate
system to the aether. This is exactly what Descartes did to the sun, the moon, and the planets.
It was by reference to such an aether-anchored coordinate system that Maxwell defined
his concept of an electromagnetic field. Each point in the aether possessed certain qualities. At
each such point, the aether possessed a certain direction and degree of internal tension. This
tension was directed toward a particular source, the center of the electromagnetic field. One's
coordinate system might move through the aether, but the aether itself kept the coordinate system
from tumbling wildly.
Because the coordinate system was anchored to prevent it from tumbling wildly, one
could center a coordinate system at the source (or center) of a field and use that coordinate system
to measure the qualities of the field at various points within it. One could distinguish between
points (x,y,z) and (x₁,y₁,z₁).
Einstein retained Maxwell's definition of field, attributing qualities to points in space, but
discarded faith in the existence of aether. He contended that apparently empty points in space
were, in fact, completely empty, but that they nonetheless possessed qualities assigned to them by
fields.
Returning to the analogy with the baseball field, it was now possible to have a field
without soil. In the absence of a soil-based field, one could never know whether one was running
toward first base, or whether it was actually first base that was running toward the person. All one
could know was that one was getting closer to first base. Such reasoning was Ernst Mach's
contribution to theory.
Eliminating faith in the aether enabled Einstein to move toward a philosophy more
relativistic than anyone who had preceded him. However, he never became a pure relativist. This
was because he retained faith in Maxwell's concept of field and his field equations, the belief that
field assigned qualities to points in space.
Einstein created a problem for himself, one that he never fully appreciated. Having
discarded faith in the existence of aether, he at the same time discarded the one perspective to
which he could anchor a coordinate system. His coordinate system was now free to tumble
wildly. Descartes' anchor had been thrown overboard. Neither stability nor tumble could be
defined. One could never know whether one's coordinate system was tumbling or not, or to what
degree, or in what direction.
This inability to define raises a fundamental question about the three dimensional
coordinate system itself. If we cannot mathematically define the functioning of a three
dimensional coordinate system because we are unable to anchor it, then is such a system relevant
to theory? At the level of the most fundamental interaction, that being between two particles, can
we say that the three dimensional coordinate system has any relevance at all?
We can no longer test for qualities of points in space because we no longer have any
absolute or aether-anchored perspective to which we can attach a coordinate system. When
discussing movement from point to point, we can no longer distinguish between points (x,y,z) and
(x₁,y₁,z₁) if the two points are located the same distance from the center of a field. We can no
longer discuss a relationship between two bodies by defining that relationship in terms of the

bodies' positions within a mathematically three dimensional space.
Fundamental, two body relationships can and must be defined in terms of a single
dimension. This single dimension is the linear distance between the two bodies. We need no
longer attribute qualities to points in space so that we can explain field relationships between two
bodies. We cannot say that field exists independent of the experiencing of it. Field is experience,
which is all that our experimental techniques will allow us to test. Field cannot be an essence that
exists independently from experience.
In terms of the analogy with the baseball field, one need define one's movement in
relation to first base only by reference to one's linear distance from the base. One needs no
reference to any soil that might allegedly be below (or above) one's feet, or to any other feature on
the ball diamond. One cannot say that one's head is either above or below one's feet, or that either
oneself or the base is moving in any absolute sense. One can only measure the distance
between oneself and the base, whether this distance is increasing or decreasing, and at what rate.
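Stated computationally, this proposal amounts to keeping only two numbers, the separation and its rate of change. A minimal sketch of such single-dimension bookkeeping, with no coordinate system anywhere (the names are hypothetical):

    def relationship(r_before_m, r_after_m, dt_s):
        """Describe a two-body relationship using only the separation r:
        its current value, whether it is closing or opening, and how fast."""
        rate = (r_after_m - r_before_m) / dt_s
        trend = ("decreasing" if rate < 0
                 else "increasing" if rate > 0 else "constant")
        return {"separation_m": r_after_m,
                "trend": trend,
                "rate_m_per_s": round(abs(rate), 6)}

    print(relationship(27.4, 26.2, 0.5))
    # {'separation_m': 26.2, 'trend': 'decreasing', 'rate_m_per_s': 2.4}
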
This was the step that Newton the absolutist refused to take. This was also the step that
Einstein, the less-than-pure relativist, failed to take. This is the one last step that we must take if
we are to discard the final remnants of the aethereal hypothesis, and to embrace pure relativity.

C. TESTING THE FIELD QUALITIES OF POINTS IN SPACE

Insofar as possible, physical theory should try to avoid making statements that are by
their very definitions unverifiable. The aethereal hypothesis did this by claiming the existence of a
kind of matter that we cannot sense. It might be possible for us to experimentally infer the
existence of an insensible matter, even though we could not directly test either its existence or its
qualities. There is one obvious problem with such an experiment: Aether is confirmed to exist as
a matter of interpretation, not of observation. How can we experimentally verify the accuracy of
our interpretation?
Einsteinian field theory also made a statement that is, by its own definition, unverifiable.
Einstein did this by claiming that field could exist in spaces that were entirely devoid of matter.
Any attribution of quality to an otherwise empty point in space is, by its very definition, an
unverifiable statement. Einsteinian field theory is even less verifiable than the aethereal
hypothesis.
Let us consider what we would have to do to verify Einsteins concept of a field that
exists as a quality of empty space. So as to avoid contaminating our empty point, we put nothing
in it. We then test to see whether the nothing we put in the empty point is affected by the alleged
quality. If nothing is affected, then the hypothesized qualities are real. If nothing is unaffected,
then the hypothesis has been disproven.
The difficulty is obvious. If we put nothing in a point to be tested, then we get no results.
If we put something in the otherwise empty point, then we have contaminated the point. We are
no longer testing the qualities of an empty point. Rather, we are testing for the existence of an
interaction between two bodies.
We cannot conclude from the existence of such an interaction that the point being
occupied by the body would possess certain qualities in the absence of the body. We cannot rule
out the possibility that some previously existing interaction between the two bodies followed the
movements of the two bodies, much as a rubber band tying together two pencils would follow the
movements of the pencils.

Next, let us consider what we would have to do to define the location of a point in space
so that we can measure its qualities. Even given our willingness to contaminate a previously
empty point in space, is it possible for us to determine that we have, in fact, moved into such a
point?
For the purpose of this argument, let us establish a set consisting of all points located a
particular distance from a single point. We will define this single, central point as being the center
of a field. The set of points equidistant from this central point comprises a sphere. According to
contemporary field theory, all points on this sphere experience equal values of gravitational
attraction. For simplicity, we will use the gravitational field only. While the same general
argument applies to electromagnetic fields, applying it becomes more complex.
To test the hypothesis that a field assigns particular values to points in space, let us now
move randomly from point to point around the sphere. How can we know that we are, in fact,
moving? We can't.
To establish our movement or nonmovement from point to point around the sphere, we
must be able to define these points. This means that we must possess an anchored three
dimensional coordinate system. Because we have no absolute or aether-anchored perspective to
which we can attach our coordinate system, we cannot differentiate among the various points on
the sphere. We can never know whether we are moving or not. We can never know whether or
not we are moving into some previously empty point in space.
Perhaps we can pursue a different approach. Instead of thinking in terms of spatial
geometry, we can instead think in terms of linear geometry. We can only determine the existence
of relative motion if our distance from the center of the field is changing. This is a purely linear
change, along a line that connects the center of the field with the point whose qualities we are
trying to test. If we move from distance A to distance B, we can conclusively state that we have
moved relative to the center of the field. As this relative distance changes, we note that our
experience of field strength intensity changes also. But does this experience of change necessarily
mean that we are experiencing the qualities of different points in space? Or is it the change in
distance that itself produces the change in experience?
Later in this chapter, I derive the inverse squared laws of both gravitational and
electromagnetic fields. This derivation makes it apparent that we cannot test for the alleged
qualities of empty points in space by altering the distance between the center of the field and the
points at which we choose to test. This is because the relative movement may actually be causing
the experience of changing field strength.
To test for the alleged qualities of points in space, we must select two or more points that
share a common distance from the center of the field. This cannot be done. We have to admit that
the hypothesis that field assigns qualities to otherwise empty points in space cannot be verified.
Yet all contemporary theory is based upon this unverifiable assumption! There can be no
proof from directly within field theory, or relating solely to field interactions. There is, however,
one proof that has been attempted through general relativity, although the proof was really
intended to prove general relativity rather than the contemporary concept of field. This attempted
proof of general relativity assumed that the contemporary, spatial concept of field was correct and
not in need of experimental validation. If general relativity can be proven through this
experiment, then the proof of general relativity will lend credence to belief that gravitational fields
assign qualities to points in space. This credence assumes, of course, that our interpretation of
observational data is correct.

Einstein contended that gravitational fields alter the values of the units of time and
distance at various points within them. The greater the strength of the field at any given point, the
greater the alteration of the values of time and distance. These altered values would slow the
velocity of light, just as passage through glass also slows the velocity of light. In effect, gravity
has its own index of refraction analogous to that of glass. However, unlike glass, this index of
refraction is not a constant that is a property of a given material. The index of refraction of gravity
exists as a gradient that is a function of the strength of gravitation at any specified point within the
gravitational field. The stronger that gravitation is at any given point within the field, the greater
the degree to which the passage of light will be slowed at that point. The stronger the gravitational
field strength is, the greater the degree to which it will refract light.
Gravitational fields are rounded just as the celestial spheres are rounded, with increasing
intensities of gravitational field strength as one passes closer to the body. Therefore, empty space
in the vicinity of the body would function like the rounded surface of a piece of glass, a
refractively convex glass lens. The gravitational field would bend the path of light passing
through it.[32]

Such a claim was already inherent in the aethereal model from which Maxwell derived
his field equations. Maxwell acknowledged that velocity was a function of the elastic tension of
the medium through which an electromagnetic wave passed. Elasticity is altered by the degrees of
tension that the medium experiences at various points within it. Increasing tension will increase
the velocity of a wave passing through the material medium. Tensions that are unevenly
distributed, as would be the case with electromagnetic fields within the aether, would necessarily
have refractive effects.
As a practical demonstration of this, let us further stretch an already stretched rubber
band. The resonant frequency of the rubber band rises, even though it is now further from one end
to the other. This is because the wave motion now travels faster from end to end than it did
before, increasing the resonant frequency. This principle is quite familiar to every pop musician.
One tunes a guitar string by altering its tension, thereby altering the velocity with which waves
travel the length of the string.
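To make the relationship concrete, here is a minimal sketch of the standard ideal-string relations, wave speed v = √(T/μ) and fundamental frequency f = v/(2L). The tensions, masses, and lengths are illustrative assumptions only, and a real rubber band also thins (lowering μ) as it stretches:

```python
# Sketch: resonant frequency of an ideal stretched string, f = v / (2L),
# where wave speed v = sqrt(T / mu) (T = tension, mu = mass per unit length).
# Illustrative values only; a real rubber band also changes mu as it stretches.
from math import sqrt

def fundamental_frequency(tension_n, mass_per_m, length_m):
    wave_speed = sqrt(tension_n / mass_per_m)  # m/s
    return wave_speed / (2.0 * length_m)       # Hz

# Raising tension raises the frequency even though the string is now longer.
print(round(fundamental_frequency(10.0, 0.005, 0.50), 1))  # ~44.7 Hz
print(round(fundamental_frequency(40.0, 0.005, 0.60), 1))  # ~74.5 Hz
```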
Maxwell acknowledged that wave velocity was a function of elastic tension. He also
claimed that electromagnetic fields altered the elastic tension of the aethereal medium at various
points within them. Therefore, this altered elastic tension should alter the velocity of light,
producing refractive effects. However, the increased elastic tension near the center of the
electromagnetic field should speed rather than slow the passage of light, producing the effect of a
concave lens rather than a convex lens.
Any theory based upon Maxwells field equations should therefore predict that fields
would produce refractive effects. According to the aethereal hypothesis, it was the
electromagnetic field that should produce such effects. According to Einstein's general relativity,
it should be the gravitational field.
Proof of Einstein's general relativity was attempted during the solar eclipse of May 29,
1919. The Royal Society sent British astronomer Eddington and others to photograph the eclipse
so that the positions of stars in the background could be measured from the photographs. If
general relativity was correct, then the photographed positions of the stars should deviate from
their actual positions, this deviation having been produced by the sun's refractive gravitational
field. Although initially credited with having proven general relativity, the results have more
recently been determined to have been indecisive.[33]
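For scale, the value that the 1919 expedition sought, the standard general-relativistic deflection for light grazing the sun, 4GM/(c²R), works out to about 1.75 arcseconds. That formula is not derived in this book; the following is merely a quick computational check using the usual textbook constants:

```python
# Sketch: the standard general-relativistic deflection of light grazing the
# sun, delta = 4GM / (c^2 R), expressed in arcseconds. Constants are the
# usual textbook values; this formula is not derived in this book.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s
R_SUN = 6.963e8    # solar radius, m

deflection_rad = 4.0 * G * M_SUN / (C**2 * R_SUN)
arcsec_per_rad = 180.0 * 3600.0 / 3.141592653589793
print(round(deflection_rad * arcsec_per_rad, 2))  # ~1.75 arcseconds
```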


In the absence of any proof to the contrary, I contend that it is possible to entirely
eliminate the assumption that field assigns qualities to points in space. Such elimination will
enable us to develop a mathematical philosophy that will be entirely relative. To develop such a
relative philosophy, we must first determine exactly what it is that we can mathematically define.

D. WHAT CAN BE MATHEMATICALLY DEFINED

Let us take two points and connect them with a single, straight line. This straight line
constitutes a single dimensional perspective. This single dimensional perspective unites two
bodies, one at each end. We can consider these bodies to be particles.
Because we have no absolute or aether-anchored coordinate system from which to
measure the absolute movement of this single dimensional perspective, we can never determine
whether the perspective is moving or not. Neither its motion nor its nonmotion can ever be
determined relative to any perspective that is more fundamental than its own. If we attempted to
attribute any motion or nonmotion to it, we would be violating the requirements of rigorous
mathematical definition.
In and of itself, this perspective is the most basic of all perspectives, but only for events
that take place along it. As the basic, functional perspective of physical theory, the individual,
single dimensional perspective becomes successor to the three dimensional coordinate system.
We can then place a second single dimensional perspective in relation to the first single
dimensional perspective. We can say that the two perspectives move relative to each other.
However, relative movement can in no way establish the existence of the absolute movement of
either perspective.
Regardless of the number of perspectives that may move relative to each other, we can
never say that any one perspective is either in motion or at rest, just as we can never say that a
coordinate system is tumbling or at rest. We have no reason to distinguish between the motion
and the nonmotion of any one perspective. Both terms are meaningless.
If we have two single dimensional perspectives, then the two must consist of either three
or four points, depending upon whether they share a common point. Each point maintains its own
relationship via a single dimensional perspective with each of the other points, just as every
particle in the universe relates to every other particle in the universe by such a relationship. There
can never be just two perspectives. There can be 1, 3, 6, 10, 15, and so on.
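These counts are simply the number of distinct particle pairs among n particles, n(n − 1)/2. A minimal sketch (mine, not the author's) reproduces the series above:

```python
# Sketch: number of single dimensional (pairwise) perspectives among n particles.
# Each pair of particles defines exactly one perspective: n * (n - 1) / 2.
def perspective_count(n_particles):
    return n_particles * (n_particles - 1) // 2

print([perspective_count(n) for n in range(2, 7)])  # [1, 3, 6, 10, 15]
```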
We can hypothesize such a universe in which all particles relate to all other particles via
single dimensional relationships. If all such relationships are themselves single dimensional in
nature, then we need not invoke the concept of qualitatively relevant space for the purpose of
explaining the relationships. All relationships are particle-to-particle, which can be defined
conceptually in terms of a single dimension. At the most fundamental level of theory, there is no
need to talk in terms of three dimensions if a single dimension will suffice. Additional dimensions
merely add unnecessary complexity.
Furthermore, we have no reason at all to distinguish between matter and energy. The
concepts of matter and energy as we distinguish them from each other are not themselves based
upon or derivable from mathematics. The concepts of matter and energy are cultural, not
mathematical. At their most archetypal level, matter and energy are noun and verb. Energy is
merely a verb that has been converted to a noun. This can be done with any verb. It must be done
when we treat the verb as being the subject of conversation.

The division of physical reality into matter and energy mirrors the division of human
language into nouns and verbs. Where I use either term in this and subsequent chapters, I am
referring to matter/energy. Although I use the terms separately for the purpose of clarifying
explanation, I do not really distinguish between the two. I am merely employing linguistic
convention. I am a prisoner of my language. There is only so much that I can do to change it.
Having reduced physical reality to a set of single dimensional relationships, each such
relationship being between two and only two particles, it is now possible to hypothesize a reality
in which each such relationship consists of attractive and/or repulsive information flowing back
and forth between the particles. This means that no point along that line, regardless of its distance
from either of the two particles, possesses any quality of its own.
We face a situation from within which we cannot use the three dimensional coordinate
system with any reasonable degree of mathematical definition. What shall we do if we discard the
coordinate system as being theoretically irrelevant? We must employ either one of two
hypotheses, both of which are single dimensional. Neither can be directly proven. Either:

(A) points along the line must possess qualities independent of our experiencing of them.
Each point along the line retains the same qualities that it presently has under the spatial
concept of field. This invites us to retain all qualities of the Einsteinian concept of field,
merely deleting two of the three dimensions of the coordinate system; or

(B) information must flow between the two points.

The former, the qualitative model, forces us to continue distorting the units of time and
distance, as Einstein did in special relativity. Furthermore, this model forces us to deal with an
additional difficulty. The line extends from one point to a second point. These two points define
the two ends of the line. What happens when the line is lengthened or shortened? If the line is
lengthened, how do the qualities of the added points come into existence? If the line is shortened,
what happens to the qualities of the deleted points? Or should we extend the line indefinitely
beyond the particles an infinite distance in both directions, thereby including points that theory is
required to consider, but that have no functional relevance?
The latter, the information flow model, allows us to restore classical constancy to the
units of time and distance. In a conceptual sense, the latter is preferable to the former because it
offers greater possibility for simple explanation. Simplicity is of theoretical value in its own right,
if for no other reason than the intellectual limitations we mere mortals possess. There are limits to
the degrees of complexity that our feeble intellects can handle. We should not equate this with
some cosmological ideal of simplicity, with belief that God acts in simple ways. Our simplicity
may (or may not) be God's simplicity. We should bear in mind that we are evolved beings whose
ultimate task is adaptation, not understanding. Adaptation is best served by simplicity.
The information flow model states that the two particles act as mirrors that continually
reflect this information back and forth. Because this information remains within the single
dimensional perspective, the movement/nonmovement of which is undefinable,
matter/energy/information must be conserved within this perspective.
Previously, physics has concerned itself with matter as structure. I suggest that the
concept of structure is unnecessary. All that is of interest is the experiencing of interaction among
particles. This being the case, I used light as the basis for my paradigm of field, the
information flow model. Light was the basis for my hypothesis that field consists of information
traveling between two points.
We know that light travels from particle to particle and has an incredible propensity to
strike particles that are sparsely distributed in otherwise empty space. This propensity is vastly
greater than random probabilities would suggest. Obviously, something is happening that is not
random. This was noted and debated among theorists during the 1920s. Theory clearly suggested
that the movement of photons should be spatially random. Evidence clearly showed otherwise.
Where theory and physical evidence clearly contradict each other, one can pursue either
of two courses of action. One can develop some complex means by which one can fit
contradictory evidence into existing thought, much as one might try to shoehorn an elephant's foot
into a kid's sneaker. This is not to say that we don't love our old sneakers. Or one can challenge
the previously existing theory. To develop an alternative theory, one can begin with that evidence
that most clearly contradicts the existing theory.
The incredible propensity of photons to strike electrons suggests the existence of a single
dimensional relationship between the sending and receiving particles. If light is merely a
manifestation of field phenomena, then it may be impossible to conceptually separate light from
field. Light and field are the same phenomenon, as this chapter will explain.
Having suggested a new paradigm, it is now our task to explore the mathematical
implications of conservation. Conservation is really nothing more than our justification for the
unquestioned use of the equals sign. Any statement that is derivable from the principle of
conservation is derivable from the nature of mathematics itself. The validity of the statement is
not predicated upon anything physically existent that manifests the statement.
One can hypothesize certain qualities of matter and its relationships without ever stating
that matter itself must exist. In essence, this is what Einstein attempted to do in general relativity.
The Lorentz transformation was itself derived from the principle of conservation. Einstein then
used the Lorentz transformation to derive the inverse squared law of gravity. This meant that the
inverse squared law was itself derivable from the conservation principle regardless of whether
gravity itself actually existed.
I contend that Einstein entered the back door and took an unnecessarily complicated route
to this derivation. Derivation is actually quite simple if one employs the premise that the universe
consists of a set of single dimensional relationships, and that each such relationship can be defined
in terms of attractive and/or repulsive information flowing back and forth within a single
dimension.
In such a situation, the only definable quantities are distance (the distance between the
two particles) and time. When one considers both time and distance, one can measure or derive
two statements. First, one can consider the instantaneous mutual velocity of the particles toward
or away from each other. Using the conservation principle, a simple derivation based upon this
velocity will give us the mathematical definition of kinetic energy.
Second, one can consider the instantaneous rate of change in this velocity, their mutual
rate of acceleration or deceleration. This latter is what the law of gravitational/electromagnetic
field strength actually is. This, too, is derivable from the conservation principle.
All basic physical principles should be derivable from distance, instantaneous mutual
velocity, and the instantaneous rate of acceleration or deceleration. All statements regarding the
latter two should be derivable from the principle of conservation. Such a structure of theory
would be both simpler and more comprehensive than general relativity.


E. THE MEANING OF THE BIG BANG

Twentieth century theorists have hypothesized that all matter and energy that physically
exists originated at the Big Bang. There are two possible hypotheses concerning the meaning of
the Big Bang. I argue in favor of the second.
Do we experience everything as having originated at the Big Bang because everything
that physically exists did, in fact, originate at the Big Bang? Or is it because we can only
experience that which originated at the Big Bang, regardless of whatever may exist or where?
The spatial and the single dimensional premises support opposing hypotheses. According
to the spatial premise, which assigns qualities of field to otherwise empty points in space, any
body that possesses mass and/or charge, and that occupies a point, will be experienced as existing,
regardless of its origin.
A body that possesses mass will be experienced by a gravitational field, regardless of
where the body originated. If this is the case and all bodies that we experience originated at the
Big Bang, then all mass that physically exists must have originated at the Big Bang.
The single dimensional premise clearly supports the opposing hypothesis. According to
the single dimensional premise, information must flow along a line connecting two particles. This
being the case, both the perspective and the information flowing within it must have originated at
some time and place. The perspective and its information must be traceable either directly or by
antecedent back to a common point of origin, a Big Bang.
We can only experience that which shared a common point of origin with us. Therefore,
we can only experience that which originated at the Big Bang. The Big Bang constitutes its own
wholly closed system.
To grasp in practical terms what this means, let us hypothesize the existence of a
baseball. I'll place it on your desk. It consists of matter that did not originate at the Big Bang.
Therefore, you cannot experience its presence.
The baseball possesses mass, but does not interact with our gravitational fields. Better
hold it down. It possesses electrons, but these electrons do not interact with our photons. So you
can't see it. In fact, it does not interact with our world of matter/energy in any way. Don't bother
trying to hold it down. It can go right through your hand, and you will never know it.
You can't interact with this baseball at all. You can't explain in terms of accepted
contemporary theory why you can't. If the baseball is real and you can't experience its presence, a
matter about which you cannot be certain, then the most fundamental concepts of physical theory
must be reconsidered. Included are the natures of time, distance, space, the meaning of relative
inertial movement, the nature of field, the movement of light, the meaning of quantum, and the
nature of electricity.
Let us proceed with this reconsideration.

F. THERMODYNAMIC TRANSFORMATIONS

As nineteenth century physics progressed, the water wheel was used to explain the steam
engine. The question that initially interested thermodynamicists was one of efficiency. Was there
some upper limit to the efficiency that the steam engine could achieve?

The earliest thermodynamicists believed that heat was a subtle fluid, caloric, that flowed
from bodies of higher heat to bodies of lower heat, just as water fell from a higher to a lower
elevation. The steam engine harnessed this flow, just as the water wheel harnessed falling or
flowing water. By the 1840s, it had become apparent that heat was not literally like water
because, as James Joule had discovered, heat itself was being transformed into mechanical power.
The water wheel involved no such transformation, or at least no apparent transformation.
Consequently, water power lost its place as the literal model for the steam engine. It was retained
instead as the archetypal guide that was not literally correct, but that could nonetheless be used for
explanation.
I propose to reverse the course of scientific history. Rather than using the water wheel to
explain the steam engine, I propose to use the steam engine to explain the water wheel. In doing
so, I will demonstrate that the steam engine and the water wheel both employ exactly the same
principle, the steam engine applying this principle to electromagnetic events, the water wheel to
gravitational events.
In both cases, the common principle can be derived from the question from which
Einstein developed special relativity, the question of relative inertial motion. The principle of
qualitative transformation that I derive from the question is merely a reinterpretation of the
significance of the question underlying special relativity, and hence an alternative to special
relativity.
Special relativity states that transiting from one inertial perspective to another inertial
perspective requires that one alter the values of the units of time and distance so that energy will
be conserved. I substitute the hypothesis that any unit of information that transits from one
inertial perspective to another inertial perspective must alter the mutual velocity of the sending
and receiving particles.
Altered mutual velocity rather than alteration in the values of the units of time and
distance is what explains conservation. Because altered mutual velocity does explain
conservation, both the principle of thermodynamic transformation (back and forth between the
radiant energy of photons and mechanical power) and the inverse squared law of field strength can
be derived from it. Substituting the principle of qualitative transformation for special relativity
allows us to derive these from a common and self-evident statement via the conservation principle.
In other words, we apply the equals sign.
To illustrate the alteration of mutual velocity, let us take two particles and a photon that
moves back and forth between them. The photon strikes each particle repulsively, is then reflected
back to the other particle, repulsively striking both particles. Back and forth. Back and forth.
Let us begin with a situation in which the two particles are moving away from each other.
As the photon repeatedly strikes the two particles, it increases the mutual velocity of the particles
away from each other. This increase in mutual velocity is accompanied by a decrease in the
energy value of the photon. Consequently, electromagnetic radiation is being transformed into
mechanical momentum.
Let us now reverse the situation. Exactly the same situation exists, except that the two
particles are now moving toward each other. The repetitively repulsive impacts of the photon with
the particles reduce their mutual velocity toward each other. This decrease in mutual velocity is
accompanied by an increase in the energy value of the photon. Consequently, mechanical
momentum is being transformed into electromagnetic radiation.
In both cases, conservation is explained by qualitative transformation. The photon cannot
transit from the inertial perspective of one particle to the inertial perspective of the other particle
without producing such a transformation. Alteration of the energy value of the photon is
accompanied by alteration of the mutual velocity of the two particles. One alteration equals the
other. In addition to explaining the steam engine, mathematical derivations from this relationship
of conservation will give us both the inverse squared law and the definition of energy. Ultimately,
it will even explain the photon itself.

G. THE IMPLICATIONS OF COMPOUNDING

It should be apparent that any increase or decrease in photon energy that results from a
single emission-reflection (or emission-absorption) event involving two particles and a photon will
be infinitesimally small. This is because the mutual velocity of any two particles in our everyday
technologies is infinitesimally small relative to the velocity of light.
However, the distances between the particles are also extremely small relative to the
velocity of light. Therefore, a single photon can be emitted and reflected several billion times per
second. The practical transformations that we experience in our everyday technologies result from
infinitesimally small changes that are compounded at a nearly infinite rate.
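As a rough order-of-magnitude check (a sketch under the simplifying assumption that the photon turns around instantly at each particle), the round-trip rate is c/(2d); the separations used below are illustrative assumptions:

```python
# Sketch: order-of-magnitude check on the compounding rate. A photon bouncing
# between two particles separated by distance d completes roughly c / (2 * d)
# round trips per second (ignoring any emission/absorption delays).
C = 2.998e8  # speed of light, m/s

def round_trips_per_second(separation_m):
    return C / (2.0 * separation_m)

print(f"{round_trips_per_second(1e-2):.2e}")  # ~1.5e10 per second at 1 cm
print(f"{round_trips_per_second(1e-9):.2e}")  # ~1.5e17 per second at 1 nm
```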
Unless one contends that the photon is occasionally removed from experience for some
period of time (as would happen with quantum absorption and emission), the velocity of light
itself is quite irrelevant to the transformational process until one achieves mutual particulate
velocity that is significant relative to the velocity of light.
If the velocity of light were either higher or lower, then the number of compounding
events per unit of time would change proportionally. At the same time, however, the degree of
transformation per event would change inversely relative to the number of events per unit of time.
The two inverses multiplied together cancel any effects that either might have. The actual velocity
of light is therefore unimportant.
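A toy sketch can make this cancellation explicit. The transformation per event is modeled here as proportional to v/c, a first-order Doppler-like shift, which is my modeling assumption rather than a formula stated in the text; the event rate is c/(2d). Their product contains no c at all:

```python
# Sketch (toy model): why the value of c drops out of the compounded rate.
# Assume the fractional energy change per reflection is ~ v / c (first-order
# Doppler shift -- a modeling assumption), and the number of reflections per
# second is ~ c / (2 * d). The product is v / (2 * d), independent of c.
def compounded_rate(v, d, c):
    per_event = v / c                 # fractional change per reflection
    events_per_second = c / (2.0 * d)
    return per_event * events_per_second

print(compounded_rate(10.0, 0.01, 3.0e8))  # 500.0
print(compounded_rate(10.0, 0.01, 1.5e8))  # 500.0 (halving c changes nothing)
```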
The same relationship, a multiplication of inverses, also applies to the number of
transformational passages back and forth that will be required to produce a particular degree of
change in mutual particulate velocity. The greater the intensity of the radiation, the fewer the
number of compounding events that will be required to produce a given change in mutual velocity.
In other words, the greater the intensity of the electromagnetic radiation, the greater the
speed at which a particular transformation will take place. However, achieving particular mutual
velocities involving a definite number of particles will still require exactly the same quantity of
transformed radiation.
Because of this enormous rate of compounding, individual events that appear to be too
small to have any practical significance at all do, in fact, have enormous everyday significance.
This enormous rate of compounding is what gives everyday significance to the events that
Einstein considered in the question underlying his special theory of relativity.
Einstein failed to consider the implications of this compounding, and therefore failed to
note the significance of the events considered by special relativity to everyday technologies.
Consequently, Einstein placed the events associated with special relativity at the fringes of human
experience rather than at the core of our everyday technologies.
Einsteins special relativity and my principle of qualitative transformation offer two
opposing and mutually exclusive solutions to the same problem, that of the transition from one
inertial perspective to another. When such a transition takes place, as when a photon passes from
one particle to a second that is moving toward or away from the first, one of two things must
happen:

A) The photon emission and subsequent reception must alter the mutual velocity of the
two particles. This alteration must be accompanied by a corresponding change in the
intensity of the photon. Photon energy must be transformed into mechanical
momentum, or vice versa, thereby explaining conservation; or

B) Conservation must be explained by altering the values of the units of time and
distance whenever transit from one inertial perspective to another takes place. If
conservation is to be explained by altering the values of the units of time and
distance, then one must contend that photon emission and reception does not alter the
mutual velocity of the two particles. To claim both would be to violate conservation.

Einstein argued the latter. In doing so, he contradicted his own theory of the photon. The
problem that led him to formulate the photon was that of the force of photon impact. It had
already been discovered that light of higher frequencies could cause metals in a vacuum to
discharge electrons from their surfaces. Einstein reasoned that these emissions took place because
photons impacted with some degree of mechanical energy.
When arguing in favor of special relativity, Einstein took exactly the opposite position:

A body with velocity V, which absorbs an amount of energy E₀ in the form of radiation
without suffering an alteration of velocity in the process [emphasis added], has, as a
consequence, its energy increased by an amount E₀/√(1 - V²/C²).[34]




This increase would take the form of an increase in inertial mass. However, he then
noted that such an increase was beyond experimental ability at that time (1920). Because inertial
mass would increase rather than mutual velocity, this increase would not have been relevant to
everyday technologies such as the steam engine and the internal combustion engine. The
transformation that Einstein hypothesized would have been of no practical value to technology.

A direct comparison of this relationship with experiment is not possible at the present
time owing to the fact that the changes in energy E₀ to which we can subject a system are
not large enough to make themselves perceptible as a change in the inertial mass of the
system.[35]


H. DERIVING THE FORMULA FOR KINETIC ENERGY

Qualitative transformation leads us directly to a simple derivation of the classical
definition of kinetic energy, E = ½MV². If the mutual velocity of two bodies is reduced from
velocity V to zero, then this is the amount of energy that one will obtain. The results of this one
equation are expressed in the units of the energy of motion. This energy of motion will remain
constant for as long as the mutual velocity of the two bodies remains constant. No transformation
takes place as long as the mutual velocity remains constant.
If mutual velocity is reduced to zero, then this energy of motion must be transformed into
some other form of energy. Conservation states that energy of one form, if reduced in quantity,
must result in energy of some other form. Transformational equivalences tell us how much of one
form of energy must equal how much of another form. Energy in this other form must show up on
the other side of the equals sign, opposite kinetic energy. In the case of the formula for kinetic
energy, the terms on the right give us a resulting number on the left. The units applied to that
number tell us that it is kinetic energy. If velocity (V) is reduced to zero, then some other form of
energy must appear on the right side of the formula. This will not be kinetic energy, but will be
equivalent to kinetic energy.
The energy of mutual motion will be transformed into either heat or electricity.
Ultimately, electromagnetic radiation (heat) and electricity prove to be the same because both
share a common mathematical derivation. As this chapter later explains, their existences are both
predicted by the conservation principle as applied to the same situation.
Let us examine the degree of transformation that will take place at any instantaneous
mutual velocity that may exist between two particles. The resulting rule is quite simple: The
degree of transformation at any given mutual velocity is proportional to the mutual velocity itself.
Let us briefly state this rule in the language of the principle of qualitative transformation, before
deviating into the more conventional use of contemporary terminology. At one unit of mutual
velocity, one unit of information would be required to produce one unit of change in mutual
velocity. At six units of mutual velocity, six units of information would be required to produce the
same one unit of change in mutual velocity.
As a practical illustration, one can observe a braking automobile. Doubling vehicle speed
doubles the braking time, but quadruples braking distance - even though the same force is applied
to the brakes. The time required to brake from 100 kilometer per hour (km/hr) to 80 km/hr is the
same as the time required to brake from 20 km/hr to zero. However, the distances traveled during
braking will differ greatly. This is because the vehicle braking from 100 km/hr will average 90
km/hr while braking, while the vehicle braking from 20 km/hr will average 10 km/hr. The faster
vehicle will travel nine times farther than the slower vehicle while braking, even though both are
braking by the same 20 km/hr in the same amount of time.
A similar principle applies to heat buildup on the surfaces of the brake pads. At 90 km/hr,
the brake pads travel across an opposing metal surface at nine times the velocity they have at 10 km/hr. This
means that heat builds up nine times faster. One km/hr of braking at 90 km/hr produces nine times
as much heat as that same km/hr at 10 km/hr. Kinetic energy is being transformed to heat in
accordance with the classical definition of kinetic energy.
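A minimal sketch of this braking arithmetic, assuming constant deceleration (the deceleration value itself is an arbitrary illustration):

```python
# Sketch: braking arithmetic under constant deceleration a. Time to shed a
# given speed difference is dv / a; distance is average speed times time.
def braking_time_and_distance(v_start_kmh, v_end_kmh, decel_ms2=5.0):
    v0 = v_start_kmh / 3.6  # convert km/h to m/s
    v1 = v_end_kmh / 3.6
    t = (v0 - v1) / decel_ms2
    distance = 0.5 * (v0 + v1) * t  # average speed times time
    return t, distance

t_fast, d_fast = braking_time_and_distance(100, 80)  # averages 90 km/h
t_slow, d_slow = braking_time_and_distance(20, 0)    # averages 10 km/h
print(round(t_fast, 3), round(t_slow, 3))  # both ~1.111 s: same speed shed, same time
print(round(d_fast / d_slow, 1))           # 9.0: nine times the distance swept,
                                           # hence nine times the frictional heat
```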

Having illustrated the principle using an everyday technology, let us now use purely
mathematical reasoning to derive what will happen to a photon that is being reflected back and
forth between two particles. We have already noted that the velocity of the photon is vastly
greater than the mutual velocity of any two particles. Therefore, any change in photon energy
that results from a single emission-reflection (or emission-absorption) event will be
extremely small.
At one unit of mutual particulate velocity, the energy of the photon is altered by one
corresponding unit of photon energy. This produces one unit of change in the mutual velocity of
the particles. At six units of mutual particulate velocity, the energy of the photon is altered by six
units of photon energy. However, this produces only one unit of change in mutual particulate
velocity.
Whenever mutual particulate velocity is altered but not reduced to zero, this change in
photon energy is proportional to the average of the initial and final mutual velocities of the two
particles. A change in mutual velocity from four units to six units of velocity would require an
average of five units of photon energy per unit of change, or a total of ten units of photon energy
to increase mutual particulate velocity by two units.
Now, let us apply this rule of proportionality to a situation in which the mutual velocity is
reduced to zero. The change in mutual velocity from any mutual velocity to zero will transform
all of the two particles' mutual kinetic energy into either heat or electricity. The total kinetic
energy present in the relationship between the two particles can be defined by reference to the total
amount of information (heat or electricity) that would result if mutual particulate velocity was
reduced to zero.
The classical formula for kinetic energy mentions kinetic energy only, and makes no
mention of heat. Energy is a generic term that applies to several forms of energy. Energy is the
unifying concept that allows us to equate one form of energy with any other form. Once we have
quantified one form of energy, we can then convert that quantity into an equivalent quantity of
some other form of energy. This was what James Joule did when he experimentally determined
the heat equivalent of mechanical energy, some 165 years ago. In this derivation, this is what I am
doing with kinetic energy.
One can derive the mathematical formula for kinetic energy by applying the rule of
proportionality of the rate of transformation (rate is proportional to mutual velocity) to the present
mutual velocity of two particles. Then reduce mutual velocity to zero. The formula that results is
a familiar one: E = ½MV². Reducing mutual velocity from six units to zero would produce 18
units of information (six units of mutual velocity times an average of three units of information
per unit of velocity).
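The same result can be checked numerically: summing increments of information proportional to the instantaneous velocity, over the reduction from V to zero, converges to ½V² per unit mass. A minimal sketch:

```python
# Sketch: summing the text's proportionality rule numerically. Information
# needed per unit of velocity change is proportional to the instantaneous
# mutual velocity, so the total from V down to zero is the integral of v dv
# = (1/2) V^2 (per unit mass; multiply by M for E = 1/2 M V^2).
def total_information(v_initial, steps=100000):
    dv = v_initial / steps
    total = 0.0
    for i in range(steps):
        v = v_initial - (i + 0.5) * dv  # midpoint of each small velocity step
        total += v * dv
    return total

print(round(total_information(6.0), 6))                           # 18.0, the six-to-zero example
print(round(total_information(6.0) - total_information(4.0), 6))  # 10.0, the four-to-six example
```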
The classical definition of kinetic energy in terms of mass and velocity emerges directly
from the principle of qualitative transformation, which is nothing but a statement of the
conservation principle. In other words, we can use the equals sign. As such, the definition does
not actually require that either matter or energy exist. However, the definition must apply to
matter and energy if they do exist. If mutual velocity was altered and transformation did not take
place, then energy would not be conserved.
Historically, it should be noted that the classical definition of kinetic energy was derived
from observation and experimentation. It was initially believed that momentum and kinetic
energy were the same. In other words, E = MV. My derivation is based upon mathematical
necessity, not upon observation and experimentation.
I have derived the definition of kinetic energy from an incomplete assumption. I have
considered only the effect of mutual particulate velocity. I have not considered the effect of
mutual particulate acceleration. Note that the classical definition of kinetic energy mentions only
mutual velocity, and does not consider mutual acceleration.
I have not considered a second phenomenon: The particles may accelerate away from or
toward each other while the photon is in transit. If two particles are accelerating away from each
other, the photon will impact with less force than if the mutual velocity had remained constant.
This means that additional photon energy will be required to produce a given change in mutual
particulate velocity.
If the two particles are accelerated toward each other, the photon will impact with greater
force than if the mutual velocity had remained constant. This means that less photon energy will
be required to produce a given change in mutual particulate velocity.
Mutual velocity alone produces the classical definition of kinetic energy. Mutual
acceleration added to mutual velocity produces the effects predicted by Einsteinian relativity. At
any given mutual velocity, there is a difference between kinetic energy as predicted by the
classical definition, and as predicted by Einsteinian relativity.
These differences will accumulate, but not significantly until mutual velocity becomes
significant relative to the velocity of light. At each increment of mutual velocity, the energy
value of that increment must be multiplied by 1/(1 - v/c) to adjust its value to compensate for this
second phenomenon.

Note that this adjustment factor considers velocity only, not acceleration. This is because
the effects of acceleration are cumulative without regard to the rate of acceleration. Hence
acceleration, which is rate, is irrelevant. Regardless of how slowly or how quickly the effects of
acceleration accumulate, the cumulative effect at any mutual velocity will be the same. The
formula stated above is for particles that accelerate away from each other. For those that
accelerate toward each other, the negative sign in the denominator becomes positive.
By examining the adjustment factor, one can see that the net effect of this second
phenomenon will be insignificant at the mutual velocities we encounter in our everyday
technologies. Therefore, we can continue to apply the classical formula for energy without
significant error.
As is also the case with the relationship of photon intensity to transformational potential,
the adjustment factor relates only to the mutual velocity itself, and is in no way related to the
number of compounding transformational events that may be required to reach this velocity.
This adjustment factor will produce an effect similar to that predicted by the Lorentz
transformation, the basis of Einstein's theory of special relativity: No two bodies can be mutually
accelerated to a mutual velocity (away from each other) equal to or exceeding the speed of light
solely by transforming the information that they share between them. This is because the amount
of information that must be transformed to achieve a mutual velocity equal to the speed of light
would be infinite.
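Taking the adjustment factor as 1/(1 - v/c) for mutual recession (my reading of the formula above), a minimal numerical sketch shows the total transformed information growing without bound as mutual velocity approaches c:

```python
# Sketch: the adjustment factor 1 / (1 - v/c) applied to each increment of
# mutual velocity (recession case). The total grows without bound as the
# final velocity approaches c, echoing the "infinite information" claim.
C = 1.0  # work in units where c = 1

def adjusted_energy(v_final, steps=200000):
    dv = v_final / steps
    total = 0.0
    for i in range(steps):
        v = (i + 0.5) * dv
        total += (v / (1.0 - v / C)) * dv  # increment's energy times adjustment
    return total

for v in (0.5, 0.9, 0.99, 0.999):
    print(v, round(adjusted_energy(v), 3))  # ~0.193, 1.403, 3.615, 5.909: diverges as v -> c
```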
However, unlike Einsteinian relativity, the mathematics of qualitative transformation
does not establish the velocity of light as being a limiting factor in any absolute, cosmological
sense. Imagine what would happen if one inserted a third body between the two bodies that are
being accelerated away from each other. Each can now be accelerated relative to this intermediate
body, but still away from each other.
The limiting mutual velocity for the two bodies is now twice the speed of light.
Mathematically, it is now fairly easy to achieve mutual velocities that exceed the speed of light. It
should be noted, however, that if two bodies were accelerated away from each other to a mutual
velocity exceeding the speed of light, they would then cease to experience each other.
Two gaps in the linear continuity of information would develop along their mutual line of
interaction. They would not be able to experience each other again until these gaps were closed, at
which time continuity of mutual experience would be restored.
This three-body arrangement does raise troubling questions about the absolute nature of
conservation in carefully managed, multibody events. It may become necessary to define certain
quantities in purely relative terms, which means that conservation could no longer be defined in
universal terms as a cosmological absolute. There may be situations in which we cannot apply the
equals sign without question. Absolute conservation may be confined to events involving only
two particles.
The greater the number of particles and the more variegated their relationships, the
greater the likelihood that physical reality will deviate from absolute conservation. Whenever any
such deviation takes place, it must take place because all individual events have respected the
conservation principle, each individually. However, the many centers of mass from which
momentum has been conserved have been in motion relative to each other. It is this relative
motion among the many centers of mass that causes the deviation.
However, this does not mean that we would be able to quantify any loss or gain in either
matter or energy. We would have no valid perspective from which to quantify. While we must
qualify our use of the equals sign and thereby restrict its use, we can in no way extend use of the
equals sign to any events that may involve violation of the conservation of matter and energy. To
quantify, we must use the equals sign. This we cannot do except where conservation takes place.

I. HISTORY OF THE CONCEPT OF FIELD

The claim that field should have a physical essence independent of our experience of it
was implicit in the archetypal origin of the concept: An agricultural field was present whether the
farmer was present to plant it or not. Dirt has an essence, whether we experience it or not. The
claim was also implicit in Maxwell's use of the aethereal hypothesis to derive his field equations:
Tensions would be present within the aethereal medium, whether we actually experienced them or
not.
One is reminded of the old philosophical question: Does a tree falling in the forest make a
noise if there is nobody present to hear it? Scientific common sense clearly suggests that it does.
However, the epistemology of field is considerably more complex than either this question or the
simple analogy with agricultural soil suggests.
Does a tree falling in the forest make a noise if it is not possible for someone to enter the
forest to hear it? Would a falling tree make a noise if there were no forest floor for it to impact?
What if it is not even possible to verify that the forest exists? What is the sound of one hand
clapping?
Are we now talking in terminology suitable for sound scientific debate? Or have we
instead become a neomedieval debating society? Where should Schrödinger's cat properly reside?

Before Einstein advocated the idea that field assigned qualities to empty space, it was not
believed that an empty space could have any qualities. There were two kinds of points that could
exist. There was the material point that was occupied by matter. There was the empty point that
was entirely empty, a void. Most believed that such an empty point could not exist.
Points must be of one type or the other. Either points were occupied by matter, or they
weren't. If points were occupied by matter, then the matter at each point could be affected by
events that included it. If a point was empty, then it could not be affected at all by any event or
phenomenon. The presence of phenomena at any given point, whether these phenomena remained
or were merely passing through, was conditioned upon the presence of matter.
In its origin, the concept of field presupposed the presence of matter. It was not a
concept that assigned qualities to otherwise empty points in space. Field could not exist at any
point unless matter also existed at that point. The existence of field at any given point was taken
to mean that matter must also exist at that same point. One could infer the existence of matter
from the presence of field. Based upon such an inference, the aethereal hypothesis proposed the
existence of matter that we could not sense. The concept of field originated as a description of the
states of this matter at various material points within it.
As Einstein explained the development of the concept of field:

where no matter was available there could also exist no field. But in the first quarter
of the nineteenth century it was shown that the phenomena of the interference and motion
of light could be explained with astonishing clearness when light was regarded as a wave-
field, completely analogous to the mechanical vibration field in an elastic solid body. It
was thus felt necessary to introduce a field that could also exist in empty space in the
absence of ponderable matter. [i.e. ordinary matter that we can sense]

This state of affairs created a paradoxical situation, because, in accordance with its
origin, the field concept appeared to be restricted to the description of states in the inside
of a ponderable body. Thus it seemed to be all the more certain, inasmuch as the
conviction was held that every field is to be regarded as a state capable of mechanical
interpretation, and this presupposed the presence of matter. One thus felt compelled,
even in space which had hitherto been regarded as empty, to assume that everywhere the
existence of a form of matter, which was called aether.[36]


In these paragraphs, Einstein implies that the concept of an insensible aether was created
to fulfill the requirements of mechanical explanation. Mechanical explanation required the
presence of a material medium, the various parts of which might move or be stressed according to
mechanical principles. This statement regarding origin is misleading, a fact that Einstein
acknowledges elsewhere.
What actually happened was that the dictates of mechanical interpretation merged with
philosophical statements that had dominated Western thought since Aristotle. As quoted earlier in
this chapter, Descartes had argued that there could be no space defined in three dimensional terms
(Descartes called this extension) unless matter filled the space. The definition of any space was
conditional upon the presence of matter. If there was no matter, then there could be no space.
Hence, there could be no void.
Descartes' argument was hardly new. It was, in fact, ancient. Space itself could be
likened to a rubber balloon. A balloon can only fill space if it is filled with air, i.e. matter. It is
the air within the balloon that allows it to assume its size and dimensions. To use Descartes'
language, it is the air within that gives the balloon its extension.
This kind of reasoning merged well into Maxwell's aethereal explanation of both light
and field. Light waves could not be transmitted through a void. A field could not exist within a
void. A void possessed no material medium into which mechanical tensions could be projected, or
through which a mechanical wave could pass.
Hence, a void was a space that was totally devoid of all experience. If one was to place a
particle in a void, surrounded on all sides by emptiness, it could experience nothing. It would be
unable to experience either electromagnetic field or light. Even though matter surrounded the
void, the lone particle would be unable to experience the existence of a universe. It would
experience total aloneness.
At no time did the aethereal hypothesis, or any philosophical statement associated with it,
claim that essence and experience could be separated from each other. The essence of both field
and radiation was that they were mechanical experiences. Therefore, their essences presumed the
presence of experiencing matter, even though this experiencing matter was beyond the reach of
our own experience.
The existence of experience-capable matter was a necessary prerequisite for the existence
of any mechanical phenomenon. Sound was a mechanical phenomenon. It had already been
demonstrated that it could not be propagated through a vacuum because there were no particles of
ordinary matter present to experience the passage of sound waves. A lone molecule in a vacuum
could not experience sound.
Aether filled all space, interpenetrating the ordinary matter that we could sense. Hence,
no void could exist. Wherever there was either field or light, there was matter present that could
and did experience the phenomenon. It was the fact that matter did experience the phenomena of
light and field that allowed these phenomena to exist, and to be extended and propagated through
space. Field was like Descartes' concept of space. It could not possess extension unless there was
matter present through which extension could take place.
The aethereal hypothesis itself was the first stage of an intellectual process whereby the
physical essence of field and light was separated from human experience, and hence from direct
experimental validation. One could experiment and then infer. The results of experimentation
were therefore inferential in nature, based upon presupposition. It was believed that field and light
phenomena were experienced as mechanical events by a form of matter that man could not sense,
but whose existence we could infer. Hence, man could not directly experience the mechanical
events themselves, because we could not sense the matter within which the events took place.
Man could only experience the results of these events as they affected his world of ordinary
matter.
As Einstein correctly perceived, the existence of aether had never been proven. By
definition, it could not be proven to exist. Aether was alleged to exist because the phenomena of
field and wavelike light could only be explained in terms of mechanical principle if there was such
a medium. One could eliminate faith in the existence of this hypothetical medium if one was
willing to drop the requirement that all phenomena be explained in terms of mechanical (generally
called dynamical) principle.
Einstein was the first to argue that the aethereal hypothesis was unnecessary. Light could
pass through empty space without requiring the presence of some material intermediary. Field
could project its qualities through empty space, similarly without the presence of a material
intermediary. For the first time in the history of physical theory, it became possible to hypothesize
the existence of an essence, field, that was wholly separable from the presence of matter that
would experience it. One could have essence entirely without experience.
This was the second stage in the intellectual process whereby the physical essence of
field and light was separated from human experience: If one dropped the requirement for
mechanical explanation, then one need not require that either we or matter experience the
phenomena. The phenomena themselves could exist entirely independently of any experience of
them, whether the experience be material or human.
This was not as great a conceptual advance as it may seem at first. In fact, it retained the
most disturbing characteristic of the aethereal hypothesis: Neither the aethereal hypothesis nor its
Einsteinian successor was subject to empirical proof. Aether was, by its very definition, beyond
the realm of human experience. Empty space is also beyond human experience. As soon as one
places a particle in an empty space to test for its qualities, that space is no longer empty.
This new proposition enabled Einstein to eliminate all problems related to the interactions
between aether and ordinary matter. No such interactions exist. All interactions involve ordinary
matter. Hence, one need not explain the previously hypothesized interactions. The question itself
was no longer valid.
Einstein also eliminated the previous requirement that all events be described using terms
and concepts that were recognizably mechanical. Events could be explained in terms of time,
space, and of interactions involving transitions from one inertial perspective to another, the two
perspectives being in motion relative to each other.
In arguing that qualities could be assigned to otherwise empty space, Einstein had created
a fundamentally new proposition. The nature of this proposition was better suited to a medieval
debating society than to sound scientific reasoning. By its very nature, the proposition could not
be proven. Furthermore, it suggested that related propositions might also be beyond proof.
If we are to base theory on mathematically necessary and empirically verifiable premises,
then we cannot base theory on Einsteinian propositions any more than we can base theory on the
aethereal hypothesis. The real value of Einsteinian thought lies in the questions that he elevated to
primacy, and in the limits that those questions place upon empirical verification. He pointed us
toward what may well be the most important of all questions, how conservation takes place when one
transits from one inertial perspective to another inertial perspective. He may not have told us what
the real answers are, but he has certainly told us where not to look for them.
Having freed physical theory from two assumptions that he regarded as unnecessary,
Einstein then went on to create new conceptual problems that had not previously existed. It often
appears that the history of science has involved the relocation rather than the elimination of
stumbling blocks. Einstein removed two stumbling blocks, only to later discover that he had
erected new ones elsewhere.
As has been previously noted in this book, eliminating the aether eliminated the one
physical perspective to which we could reasonably anchor a three dimensional coordinate system.
Henceforth, any coordinate system that we might adopt would have to be adopted on a wholly
arbitrary basis. We could establish no correlation at all between any three dimensional coordinate
system and the dictates of physical reality.
The three dimensional coordinate system became an entity well suited for neomedieval
debate. How many angels can stand on the head of a pin? The logical answer to this question
depends upon the body or perspective into which one sticks the pin. In the case of the wholly
empty and relative spaces of Einsteinian thought, one could also argue the issue as to whether it is
angels standing on the pin, or a pin being held by angels.
In discarding the aether, Einstein also discarded the only derivation ever achieved for the
inverse squared law of the intensity of the field experience. This derivation had been for
electromagnetic fields only, not for gravitational fields. As far back as Newton, scientists had
tried to derive the inverse squared law of gravitation from something more fundamental than itself,
but without success.
With aether gone, all that remained were time and distance. Einstein then attempted a
derivation of his own. Having assigned qualities to otherwise empty space, he found himself
facing the same problem that Maxwell had encountered when he assigned qualities to material
points within the aether: The concept of a qualitative space could be used to derive either
gravitational or electromagnetic field strength, but not both simultaneously. This was because
physical reality assigned varying ratios of one field to the other at the various points of physical
existence.
Einstein claimed that "the laws of pure gravitational field are more directly linked with the idea of general relativity than the laws for fields of a general kind."[37] Because of this linkage,
one could use general relativity to derive a law for gravitation that was not similarly derivable for
electromagnetism. One could pick one, and exclude the other.
Einstein argued that it was possible to derive gravitational field strength from qualities
assigned to points in space. These points were not assigned material elasticity as in Maxwells
derivation, but rather varying qualities of the units of time and distance. As one moved from one
point to another, experiencing a changing degree of gravitational attraction, one would experience
changing values of the units of time and distance. This change would result in a change in the
degree of field attraction.
He was now engaged in circular reasoning. Gravitation and the units of time and distance
were inextricably linked to each other. Which assumed priority was a chicken-versus-egg kind of
question. If the gravitational field was to be explained by reference to altered values of the units
of time and distance, then it should not be possible for time and space themselves to exist in the
absence of a gravitational field. Einstein was intellectually consistent on this matter. This was
exactly what he claimed.
In general relativity, Einstein claimed that this concept was the successor to Descartes' argument that a true void could never exist. Einstein argued that the strength of the gravitational field assigned values to the units of time and distance present at all points within it. This being the case, time and distance could have no values at all in the absence of a gravitational field.

"There is no such thing as an empty space, i.e. a space without field. Space-time does not claim existence on its own, but only as a structural quality of the field. Thus Descartes was not so far from the truth when he believed he must exclude the existence of an empty space. It requires the idea of the field as a representative of reality, in combination with the general principle of relativity, to show the true kernel of Descartes' idea; there exists no space empty of field."[38]

This meant that the Newtonian claim that space and time existed independently of matter could not be correct. One could no longer state, as Newton did, that space existed "without relation to anything external." One could no longer state, as Einstein suggested Newton would have stated, that if matter were to disappear, "space and time alone would remain behind [as a kind of stage for physical happening]."[39]
We have now come full circle. We began with Descartes' claim regarding the
nonexistence of a true void, a statement that involved a continuation of ancient thought. We have
returned to Descartes.
Have we closed a perfectly logical circle? Or are we instead running in circles? If we are
running in circles, then how can we escape? What is it that yokes us to the center of this circle?
At the center of the circle, one finds a single statement: Field is a phenomenon that
involves essence, and that assigns this essence to points in space. With regard to this statement, it
ultimately makes very little difference if one accepts the aethereal hypothesis or Einsteins denial
of it. In either case, one is making a statement that cannot be subjected to empirical verification.
In either case, one is also accepting a second statement, a stumbling block that prevents
us from fully exploring the relationship between gravitation and electromagnetism: One cannot
derive the inverse squared law of both gravitational and electromagnetic fields from any single
statement that attributes qualities to points in space. I contend that one cannot hope to fully
understand the practical relationship between the two kinds of field until one can find a common
derivation.
What was really ironic about this debate concerning the nature of space was this: The
debate was driven by two phenomena, field and light, that were themselves defined in linear rather
than spatial terms. The inverse squared law of field strength was itself a linear equation
correlating field strength with mutual distance. By nature, it is a vector. Vectors are linear.
Similarly, light is linearly propagated from its source to its receptor. Mathematically, there is no
mention of space. Our mathematical statements regarding both field and light are linear in nature.
We have then superimposed these linear mathematical statements upon a three dimensional
coordinate system, from which we believe them to be inseparable.
If we reject the premise that field assigns qualities to points in space, then we cannot use Maxwell's field equations or Einstein's general relativity to explain field. We must seek an
entirely new explanation. Our mathematics must be linear rather than spatial. The questions that
we must seek to answer cannot concern space, but only line, distance, mutual velocity, and the rate
of change in mutual velocity. The inverse squared law itself says nothing whatsoever about space.
The law itself only discusses distance, correlating distance with the rate of mutual
acceleration/deceleration.
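To make this linearity explicit, the law can be stated without any reference to space at all (a restatement in my own notation, not a formula drawn from elsewhere in this book): with $D$ the mutual distance and $k$ a constant characterizing the particular pair of bodies,

$$a(D) = \frac{k}{D^2},$$

where $a$ is the rate of mutual acceleration or deceleration along the single line connecting the two bodies.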
If we discard belief in the existence of an essence attributable to space, then we are left
with a single remaining issue: experience. Coincidentally, this one remaining issue involves
everything that we can experimentally test. What mathematical relationships govern our
experience of the phenomenon that we have called field?
We need only concern ourselves with the correlation between distance and the intensity
of the attractive or repulsive experience. The principle of qualitative transformation, previously
used to derive both the principle of thermodynamic transformation and the formula for kinetic
energy, also correctly predicts this correlation as well.
In using qualitative transformation to derive the inverse squared law of the experience of field strength, we share the single question that served as the conceptual point of beginning for Einstein's theories regarding gravitational fields. We want to know what happens when
information transits from one inertial perspective to another inertial perspective. We will start
down the path that he took, but then deviate quickly into a simpler and more direct pattern of
thought.
Qualitative transformation predicts a particular correlation between mutual particulate
distance and the intensity of the experience of information. If this correlation corresponds with
known and measured field phenomena, then we can reasonably hypothesize that the law of
gravitational and electromagnetic field strength is the result of qualitative transformation. Having
established a mathematical correlation, it will no longer be necessary for us to credit field with
structure, either spatial or linear. The resulting derivation suggests that the experience of changing field intensity is actually produced by the changing distance itself, regardless of the distance involved or of whether one is moving toward or away from a particular body. There is a direct cause/effect
relationship between the mutual distance and the experienced intensity of mutual attraction or
repulsion. One does not experience that which already is. One creates that which one
experiences.
The principle of qualitative transformation accurately predicts the inverse squared law of
experienced field strength. It also states that field as we experience it is entirely experiential.
Field as an essence independent of experience does not exist. The principle then goes beyond anything that Einstein envisioned to predict photon emission and absorption, the movement of light, its received intensity, and its interference patterns at extremely low light levels. Qualitative transformation unites theory to a far greater degree than either relativity or quantum theory has been able to in the past. It even uses relativity to predict quantum.

J. DERIVING THE INVERSE SQUARED LAW

Before beginning the explanation, it is important to restate the most important premise of the information flow model: When dealing with fields (but not quantum emission-absorption),
information is not retained by the receiving particle. Instead, it is immediately reflected back
toward the particle that shares the particular single dimensional line of interaction. That which is
received is immediately reflected, and at the same intensity with which it is received. Any
alteration in received intensity necessarily and immediately alters the intensity of reflected
information.
Both the formula for kinetic energy and the inverse squared laws can be derived from the
principle of qualitative transformation. The principle of qualitative transformation states that
information cannot transit from one inertial perspective to another inertial perspective without
altering the mutual velocity of the sending and receiving particles. If two particles remain at the
same distance from each other, then they have a mutual velocity of zero. They share the same
inertial perspective. When information transits from one particle to the other within the same
inertial perspective, no transformation takes place.
The inverse squared laws differ from kinetic energy in this one respect: Kinetic energy
can exist only when there is mutual velocity. The inverse squared laws (adjusted for mutual
velocity) also apply when there is mutual velocity, but do not require the existence of mutual
velocity. Conceptually, the inverse squared laws deal with the nontransformational sharing of the same inertial perspective, in other words, with situations of zero mutual velocity. As Einstein
correctly noted, the inverse squared laws are strictly accurate only when there is zero mutual
velocity.
If we are dealing with zero mutual velocity, then how can we apply the principle of
qualitative transformation? Qualitative transformation can only be applied where there is mutual
velocity, hence two different inertial perspectives. This question is answered by another question.
How do we get from one distance to another? The inverse squared laws correlate experienced
field strength with distance. They would be meaningless if we didn't move from one distance to
another. We get from one distance to another by accelerating, thereby creating mutual velocity.
This can be any mutual velocity. We then decelerate back to zero mutual velocity, once again
sharing the same inertial perspective, but at some different distance. Qualitative transformation
takes place while we are in transit from one distance to another.
To illustrate the principle that follows, let's first put the arguments into the language of baseball. I'll throw you the ball while you are running away from me. At what distance will you catch the ball if I throw it now? How will your own velocity affect the velocity at which the ball will hit your mitt? Because the ball will be hitting your mitt more slowly than if you stood still, won't your velocity expand the time that the actual impact will take?
It's really your choice. You don't have to run away from me. You can run toward me if you like. In either instance, the direction and the velocity at which you run will determine your distance from me at the time you catch the ball, and both the total momentum with which the ball will impact your mitt and the expansion or compression of time the actual impact will require.
The nature of this impact will consist of both the total momentum with which the ball
impacts your mitt and the duration of the impact, the time that your mitt will have to absorb the
momentum of impact. Note that there are two phenomena present. One is the total momentum of
impact. The second is the intensity of impact. The faster the ball impacts your mitt, the greater
the intensity of that impact.
In a conceptual sense, experienced field strength corresponds with the intensity of impact,
not the momentum of impact, as will be explained below. Qualitative transformation of field
information corresponds with the total momentum of impact, not its intensity.
There is a definite, mathematically definable relationship between velocity and the
intensity of impact. The time the impact takes is inversely proportional to the velocity. At 90
miles per hour, a baseball impacts the mitt in half the time as at 45 miles per hour. At 90 miles per
hour, the total momentum of impact is double that of 45 miles per hour. Therefore, the intensity
of impact is four times as great, not twice as great. The intensity of impact is a function of the
square of the velocity of impact.
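The arithmetic can be checked directly. What follows is a minimal sketch in Python; the baseball mass and the stopping distance of the mitt are illustrative assumptions of mine, not figures from the text.

    # Intensity of impact, taken here as momentum delivered per unit of
    # impact time. Doubling the velocity doubles the momentum and halves
    # the impact time, so the intensity rises fourfold.
    def impact_intensity(velocity, mass=0.145, stopping_distance=0.1):
        # mass in kilograms (roughly a baseball) and stopping distance in
        # meters are assumed values for illustration only
        momentum = mass * velocity                  # total momentum of impact
        impact_time = stopping_distance / velocity  # inversely proportional to velocity
        return momentum / impact_time               # intensity of impact

    slow = impact_intensity(20.0)  # roughly 45 miles per hour
    fast = impact_intensity(40.0)  # roughly 90 miles per hour
    print(fast / slow)             # prints 4.0: four times the intensity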
This is explanation by analogy. I have chosen to use an explanatory analogy that is not
literally correct, but that will help the reader visualize the nature of the discussion that follows.
Note that my analogy confuses momentum (p = mv) with kinetic energy. As applied to catching
the baseball, the proper concept to apply would be kinetic energy, not momentum. However, the
proper concept to apply to informational interactions involving two particles is the momentum of
the information, not energy. This is because the concept of mass, which is essential to kinetic
energy, cannot be applied. The momentum of the information can be described as the rate at
which the information is being received, times the velocity of reception. However, the concept of
momentum is not sufficient in and of itself.
Let us return to the assumption that attractive and/or repulsive information passes back
and forth along a line connecting two particles. Any mutual velocity will necessarily result in
qualitative transformation. Field strength is the intensity with which a particular body is
experiencing the information that it is receiving at any given time. Intensity is a product of the
rate at which information is being received, which is itself altered by mutual velocity, times the
velocity with which the information is impacting. Velocity is therefore factored into the equation
twice, not just once. This produces a squared relationship.
Any mutual velocity that the particles possess, either toward or away from each other,
will necessarily alter the intensity of the experienced field. Each particle will then immediately
reflect experienced field with exactly the same intensity that it has experienced.
The total quantity of information flowing back and forth between the two particles is
inversely proportional to the distance between the particles. This relationship is the direct result
of qualitative transformation. Doubling the distance halves the total quantity of information that is
flowing. Halving the distance doubles the quantity of information. The time required for the
information to travel back and forth is also proportional to this same distance. The rate at which
the total quantity of flowing information is experienced is therefore inversely proportional to the
distance between the two particles. The two inverses must be multiplied together to produce the
intensity with which field information is experienced. Consequently, the intensity with which the
information is experienced is inversely proportional to the square of the distance. This is the
inverse squared law of gravitational and electromagnetic fields.
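In symbols (my shorthand; the factors I and R are defined formally later in this section, and D is the mutual distance between the two particles), the argument of this paragraph is simply:

$$I \propto \frac{1}{D}, \qquad R \propto \frac{1}{D}, \qquad \text{experienced intensity} \propto I \cdot R \propto \frac{1}{D^2}.$$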
If the field information is repulsive, then the relationship is like that previously explained
in thermodynamic transformations. If the particles are moving away from each other, then field
information must be transformed into mutual velocity, increasing mutual velocity. If the particles
are moving toward each other, then mutual velocity must be transformed into field information,
reducing mutual velocity. In either case, the effect of the field information is repulsive. This is
the relationship that takes place within the cylinder of any steam engine or internal combustion
engine. This is the principle of thermodynamic transformation. This is also the principle of the
electric motor/generator, which transforms back and forth between like-charged repulsive
electromagnetic fields and mechanical motion.
If the field information is attractive, exactly the opposite takes place. If the particles are
moving away from each other, then field information must be transformed into potential
(potentially kinetic) energy. This is because mutual attraction slows mutual velocity, which means
that mutual velocity must be converted into something else. If the particles are moving toward
each other, then mutual attraction will cause mutual velocity to increase. This increase is
accompanied by an increase in kinetic energy. Potential energy is transformed into kinetic energy,
a phenomenon that we have long understood. This latter transformation is accompanied by an
increase in the quantity of attractive information, which is experienced as stronger field attraction.
As one gets closer to the earth, gravitational attraction increases. This is the principle of
gravitational attraction as first stated by Newton.
Consequently, the conservation principle that underlies the operation of the water wheel
is exactly the same as the principle that underlies the operation of the steam engine. There are two
differences in the application of the principle. First, operation of the water wheel involves a
gravitational transformation rather than an electromagnetic transformation. Second, the water
wheel transforms attractive rather than repulsive information.
Let us now derive a mathematical formula that will describe a single transformational
event, one passage from a particle to a second particle. Practical events compound this formula
very many times to produce practical results.
Within any wholly self-contained relationship, three factors affect the experience of field strength. The first is an actual increase or reduction in the total quantity of information (I) flowing along the single dimensional perspective that unites the two particles. The total quantity is inversely proportional to the mutual distance between the particles.
The second factor is any increase or decrease in the rate (R) at which this total quantity of information (I) is experienced. The instantaneous rate at which the information present in the relationship is experienced is also inversely proportional to the mutual distance between the particles.
The third factor is the velocity of reception at which this information is received. Mutual particulate velocity alters the velocity of reception. Alteration of the velocity of reception as a result of mutual particulate velocity is what produces the transformations back and forth between information and either kinetic or potential energy.
As with thermodynamic transformations (between heat and mechanical force), this transformation is the product of relative inertial motion and of the transiting of field information back and forth between two inertial perspectives. Transformation is directly derivable from the conservation principle.
The first two factors both produce results that are inversely proportional to the mutual distance between the particles. The effects of these two factors must be multiplied together (I · R) to produce the net result. The two relationships, both inversely proportional to the distance between the two particles, multiplied together produce the inverse squared law of field strength.
Two particles are moving away from each other at a mutual velocity of V. In the formulas that follow, velocity V is a positive number if the particles are moving away from each other, and negative if the particles are moving toward each other. A unit of field information with attractive power $I_1$ (negative if the field information is repulsive) departs one particle for the other. The two particles are located distance $D_1$ apart. Assuming that the distance between the particles is changing at a constant rate, the emission will impact the other particle when this second particle is located at a distance of $D_2$, the formula for which is

$$D_2 = D_1 \frac{C}{C - V},$$

C being the velocity of light.
The total attractive power of impact $I_2$ will be affected by the mutual velocity of the emitting and receiving particles. The total power of impact must be adjusted by multiplying the power at emission $I_1$ by an adjustment factor. This adjustment is

$$I_2 = I_1 \frac{C - V}{C}.$$

The difference between $I_1$ and $I_2$ is information that is being transformed into potential energy (kinetic energy if the field information is repulsive).
At the same time that the total quantity of information flowing between the particles is changing, the portion of that quantity being instantaneously experienced by the particles is changing as well. The total time that will be required to experience the received information will be increased or decreased by the formula

$$T_2 = T_1 \frac{C}{C - V}.$$

This means that the instantaneous intensity of the field as experienced by the particle will be altered by a factor that is the inverse of the increased or decreased time required to experience the received information. The instantaneous intensity of the field information as experienced by the receiving particle is altered by the factor

$$R_2 = R_1 \frac{C - V}{C},$$

which is the same factor that has been applied to the total power of impact $I_2$.
To obtain the instantaneous experience of field strength, an experience of the rate at which information is being received rather than the total quantity of information flowing between the particles, the change in the total quantity of information that results from qualitative transformation must be multiplied by the expanded or reduced period of time required to receive that information:

$$R_2 I_2 = R_1 I_1 \left( \frac{C - V}{C} \right)^2 .$$

The result is the inverse squared relationship as derived directly from the conservation principle. Practical transformations require many compounding events that apply this formula.
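A numerical illustration of a single transformational event may help. This is a minimal sketch in Python of the formulas just derived; the particular values chosen for D1, I1, and V are arbitrary assumptions for illustration.

    # One transformational event, using the formulas derived above.
    # V is positive when the particles are moving away from each other.
    C = 299_792_458.0  # velocity of light, meters per second

    def single_event(D1, I1, V):
        D2 = D1 * C / (C - V)                  # distance at which the emission impacts
        I2 = I1 * (C - V) / C                  # total power of impact after adjustment
        intensity_factor = ((C - V) / C) ** 2  # net factor applied to R1 * I1
        return D2, I2, intensity_factor

    # Two particles separating at one-tenth the velocity of light:
    D2, I2, factor = single_event(D1=1.0, I1=1.0, V=0.1 * C)
    print(D2)      # about 1.111: the emission catches the receding particle farther out
    print(I2)      # 0.9: one-tenth of the information is transformed into potential energy
    print(factor)  # 0.81: the experienced intensity is altered by the squared factor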
This simple derivation makes one assumption that violates the principle of qualitative
transformation itself. I assumed that transformation takes place while the mutual velocity of the
two bodies remains unchanged. The principle of qualitative transformation states that this is
impossible where attractive or repulsive information is being continually emitted and received
between two particles, where mutual velocity exists, and where these particles are interacting only
between themselves. In the presence of mutual velocity unaltered by the application of external
forces, mutual movement must be accompanied by changing mutual velocity.
This means that the mutual velocity of the two particles must be changing while any unit
of information is in transit between the two particles. The result of this change is that the inverse
squared law of field strength will not be experienced to be strictly true.
If the two bodies are accelerating toward each other or decelerating away from each
other, then the degree of transformation per unit of mutual distance will be experienced as being
slightly greater at any mutual distance than would be the case had mutual velocity remained
constant. If the two bodies are decelerating toward each other or accelerating away from each
other, then the degree of transformation will be experienced as being slightly less.
These effects will increase cumulatively until a maximum mutual velocity is reached,
either toward or away from each other, and will then decline back to zero as mutual velocity is
reduced to zero. However, this process produces no net gain or loss of information as one goes
from zero mutual velocity to maximum and back to zero. This would, however, affect an elliptical orbit, which means that the principle of qualitative transformation not only predicts the inverse squared law, but also the motion of the perihelion of Mercury's orbit as predicted by Einstein's theory of special relativity.
Using special relativity, however, one must apply the Lorentz transformation to alter the
values of the units of time and distance. This must be done because special relativity accepts field
theory's attribution of field qualities to otherwise empty points in space. The numerical values of
the qualities of particular points in space remain constant without regard to either the occupancy or
the vacancy of the particular points, or the velocity or direction of movement of a body through
any given point.
Under special relativity, the numbers of units of time and distance remain constant, but
the values of those units are adjusted to compensate for the effects of mutual velocity. It is not
possible to have units of time and distance whose values are absolute constants.
If one refuses to attribute qualities to otherwise empty points in space because the
existence of such qualities cannot be proven, then one need no longer apply the Lorentz
transformation to alter the values of the units of time and distance. We can return to the classical,
pre-Einsteinian conceptual world in which the values of the units of time and distance always have
constant values regardless of mutual velocity.
We can argue instead that the direction of motion and the degree of mutual velocity will
affect the strength of the field as it is experienced at any given distance. If we are moving toward
a body, then we will experience field intensity at any given distance as being greater than we
would if we were not moving toward the body. If we are moving away from the body, then we
will experience field intensity as being less. This is because we have experienced acceleration.
Any change from zero mutual velocity to some nonzero mutual velocity must involve acceleration,
thereby altering the gradient of our changing field experience.
Einsteinian relativity has complicated our understanding of reality by multiplying a
conceptual error times its inverse. The error is the assumption that field attributes constant values
to particular points in space. The inverse is the alteration of the values of the units by which such
constancy is measured, using the Lorentz transformation to compensate for mutual velocity.
The product of such multiplication is unity (the number one), which means that formulas
employing canceling errors produce correct results. The problems that result from including
canceling errors are those of interpretation. A formula that contains canceling errors makes two
unnecessary statements, both of which must be considered when interpreting the meaning of the
formula. Nothing about our beings or our understandings deprives God of the right to engage in
such mathematical relationships. We can only hope that God has not chosen to do so.
Adding additional, unnecessary statements makes it more difficult to perceive what the
formula actually means. We have mathematical formulas that predict correctly, but we fail to see
what these formulas mean. We fail to perceive the fundamental simplicity of the universe, or at least the simplicity that we hope is there.

K. PREDICTING THE PHOTON

What exactly is light? Is it a wave motion moving through the aether like sound waves
move through air, as Maxwell hypothesized? Or is it a wavelike phenomenon moving through
empty space? Or is it instead a particle that moves through space, interacting with particles from
time to time along its path? Or is light a synthesis of particle and wave? Or?
Historically, there have been two possible languages that we could use to describe
photons. These were the language of wave and the language of particle. As Einstein noted in a
general history he coauthored with Leopold Infeld:

"There seems no likelihood of forming a consistent description of the phenomena of light by choice of only one of the two possible languages. It seems as though we must use sometimes one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explain the phenomena of light, but together they do!"[40]

It is worth noting that both of these languages are actually analogies, and both are of
mechanical origin. According to particle-wave theory, light is a wave that often acts like a
particle, or a particle that possesses wavelike qualities. The problem with such an explanation is that it really doesn't fully explain the phenomena of light. As the discussion that follows will show, there are still a number of unanswered questions for which we have no hypothesis at all. If we attempt to answer these questions using either language, our results are paradoxical. Paradox is better suited to theology than to science. At present, we either accept paradox or we are unable to find any plausible explanations at all.
Our contemporary concept of light is paradoxical, "a statement that is seemingly contradictory or opposed to common sense and yet perhaps true."[41] Paradoxical statements
confront science with a choice. Science can either accept the alleged paradox as valid and then
attempt to build theory based upon something that appears self-contradictory, or science can
attempt to eliminate the alleged paradox in favor of some internally consistent statement.
When science attempts the latter, it risks discovering that perhaps neither statement is
really true. There may be a third possible language, a language that is not mechanical in either its
origin or its nature.
Thus far, this chapter has resolved the problem of the common derivation of the inverse
squared law of field strength by arguing that all past theories made a single mistake: They erred
by attributing essence to points within the field. There is an alternative hypothesis: The
experience of changing field intensity is instead an experience of changing informational intensity,
the change being produced by qualitative transformation. It is the mutual movement itself that
produces the changing experience, not a change in position within the field that produces the
experience. Previous theory assumed that field structure must exist independently of our
experiencing of it. The alternative theory denies that field structure, as previously understood,
exists.
My search for this new theory was driven by a perceived need to find a single, common
derivation for the inverse squared laws of gravitational and electromagnetic field strength. Any
old assumption that stood in the way of derivational commonality had to be challenged and
ultimately discarded.
A similar situation exists with regard to the paradoxical particle/wave theory of light. The two are impossible to reconcile not only with each other, but also with the nature of the field experience. Maxwell had sought to reconcile the theories of field and radiation into a single
theory using the aethereal hypothesis. Although the aethereal hypothesis has since been discarded,
the search for a common theory that embraces both field and radiation remains imperative.
In fact, the search for a common theory has in recent decades become more imperative than ever before. According to Niels Bohr's theory of quantum emission and absorption of photons, a shift by an electron from one orbital energy level to another will emit or absorb a photon. What this means is that light and matter are more closely and inextricably related than previously understood.
What do the theories of matter and radiation have in common that would allow them to explain the specific principle by which such emission and absorption takes place? If we ever find a simple statement of principle by which to explain the actual acts of emission and absorption,
then this statement will tell us much about the nature of the photon itself. It should allow us to
reconcile the theories of field and radiation into a single statement.
I propose such a reconciliation. This reconciliation makes the same claim that I have
already made about the experience of field: An experience of changing field intensity is an
experience of transformational change, not something to which essence in the sense of structure
can be assigned.
A photon is merely a particular way of experiencing a transformational change in field
intensity. Because this is the case, the concepts of field and radiation cannot be separated from
each other. Field and radiation are, in fact, exactly the same phenomenon.

L. PRINCIPLE OF PHOTON EMISSION

Thus far in this book, it has proven possible to reconcile the inverse squared laws of the
intensities of gravitational and electromagnetic fields. However, this reconciliation has left the
question of a second possible reconciliation unresolved. How do gravitation and
electromagnetism share the concept of momentum, Newton's third law? We know that gravitational momentum is conserved, but that electromagnetic momentum is not. What does this difference mean? Is this difference functionally relevant, and in what way?
We know that the conservation of momentum applies to events involving mass, in other words, to gravitational fields. The conservation of momentum, Newton's third law, has long been accepted as being self-evident. For every action, there must be an equal and opposite reaction.
One cannot push body A unless one has some other body against which to push. Body A cannot be pushed unless body B exists. One can place one's hands against body A and one's feet against body B, and push using both arm and leg muscles. What Newton's third law states is that
one must move both equally, and in opposite directions. This is intuitively obvious.
Both bodies are moved relative to some central position referred to as the center of mass.
This center of mass is like the pivot point on a teeter totter. Relative to this central position, the
two movements are of equal magnitude. The less massive body is moved further from the center
of mass than is the more massive body, much as one might balance the two masses by placing
them at differing distances from the central pivot point of a teeter totter. The mass of body A
times the velocity component assigned to it will equal the mass of body B times the velocity
component assigned to it.
To illustrate this division of mutual velocity, I hand you a rifle. The bullet you will fire weighs no more than a very few grams. The rifle itself weighs three to four kilograms. To the weight of the rifle you then add your own weight, perhaps 70 kilograms. Your weight combined with that of the rifle is perhaps 6,000 times the weight of the bullet.
Momentum will be conserved when you fire the bullet. The bullet departs the muzzle
traveling 600 meters per second. The recoil propels you backwards at 10 centimeters per second.
The momentum that the exploding powder has imparted to you and the rifle equals that imparted
to the bullet.
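The figures above can be verified with the conservation of momentum itself. In this minimal sketch, the bullet mass is back-calculated from the 6,000-to-1 weight ratio given in the text; it is an assumed figure, not one stated there.

    # Momentum check for the rifle example.
    bullet_velocity = 600.0                    # meters per second, from the text
    shooter_and_rifle = 73.5                   # kilograms: 70 kg shooter plus a 3.5 kg rifle
    bullet_mass = shooter_and_rifle / 6000.0   # about 12 grams, per the 6,000:1 ratio

    bullet_momentum = bullet_mass * bullet_velocity        # 7.35 kg*m/s
    recoil_velocity = bullet_momentum / shooter_and_rifle  # conservation of momentum
    print(recoil_velocity)  # 0.1 m/s, the 10 centimeters per second given above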
Now, let us observe this event from the perspective of some third location, a perspective
other than that of either you or the bullet. This third perspective will not be affected by the
explosion. What qualitative, gravitational transformations will I experience as a result of the
explosion? Note that I must experience two gravitational transformations. One transformation
will be between me and the rifle/shooter. The other will be between me and the bullet.
I cannot experience a single transformation by itself. I must experience both transformations simultaneously. These two will be equal and in opposing directions. The one transformation in the one direction equals the second transformation in exactly the opposite direction. This is because gravitational momentum is being conserved. Therefore, I experience no net transformation at all. The two transformations mathematically cancel by adding equal and opposing vectors.
Thus far, this discussion has dealt with only a single kind of field, the gravitational. Now
let us add a second kind of field to our discussion, the electromagnetic. Bodies that possess one
must also possess the other. Could any atom exist without both protons and electrons? Let us
now see if we can simultaneously apply the self-evident principle of the conservation of
momentum to both kinds of field. What we quickly discover is that the principle of the
conservation of momentum is not as self-evident as we had thought.
I will refer to gravitational events as having mass and electromagnetic events as having
charge. The particles involved in these events possess both mass and charge, as do protons and
electrons.
If the mass/charge ratios of the two particles are equal, then both gravitational and electromagnetic momentum will be conserved. This is because it will make no difference whether one chooses to base one's analysis on mass or charge. If the ratios are 3/3 and 1/1, the ratio of one body to the other will be 3/1 regardless of whether one chooses mass or charge. Momentum will
always be conserved relative to the same center, whether we call it the center of mass or the center
of charge. The two centers are the same.
Physical reality is not this simple. Mass/charge ratios are never equal in events that involve electrons and nuclei. In the nucleus of the atom, the units of mass will always at least equal, and nearly always exceed, the units of charge. In the case of orbiting electrons, however, the units of charge will always vastly surpass the electrons' exceedingly small masses. Although the mass of the electron is exceedingly small relative to the mass of the proton, both have the same amount of charge, although the proton is positively charged and the electron is negatively charged.
Let us return to the example of the bullet and the rifle/shooter and assign equal units of
charge to both. We assign 100 units of electromagnetic charge to the bullet, and 100 units of
electromagnetic charge to the rifle/shooter. Both bullet and rifle/shooter retain the masses
previously assigned. Considering charge to be like mass, the new situation resembles one in
which a 100 kilogram man fires a 100 kilogram bullet at high velocity, but experiences very little
recoil. This situation clearly violates common sense. What significance should we assign to this
violation?
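A short computation makes the violation explicit. This sketch simply substitutes charge for mass in the momentum bookkeeping, using the figures assigned above, to show that the two bookkeepings demand different recoils.

    # Recoil computed two ways: once with mass as the conserved quantity,
    # once treating charge as if it played the same role.
    bullet_velocity = 600.0   # meters per second
    bullet_mass = 0.01225     # kilograms (the ratio used earlier)
    shooter_mass = 73.5       # kilograms
    bullet_charge = 100.0     # units assigned in the text
    shooter_charge = 100.0    # units assigned in the text

    recoil_by_mass = bullet_mass * bullet_velocity / shooter_mass
    recoil_by_charge = bullet_charge * bullet_velocity / shooter_charge
    print(recoil_by_mass)    # 0.1 m/s: the recoil we actually observe
    print(recoil_by_charge)  # 600.0: the recoil a charge-based balance would demand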
In this situation, it is not possible for the momentum of both mass and charge to be
conserved from within the same perspective. The center of mass cannot be the same as the center
of charge. These two perspectives must move relative to each other. We can only retain our
perspective relative to one of the two perspectives. We must choose either the center of mass or
the center of charge. In reality, our perspective has been chosen for us.
Gravitational momentum is conserved in our perspective. This means that our
perspective is the center of mass. Because our perspective is that of the center of mass, we will
experience the center of charge as moving with regard to our perspective as a result of the mutual
transformation involving the two bodies. Consequently, events involving charge will be
unbalanced from our perspective. There will be no equal and opposite reaction. There must be
opposing movements, but these opposing movements must always be unequal.
With regard to charge, we will experience a strong transformation as a result of the
movement of the bullet, but only a very small transformation as a result of the opposing movement
of the shooter and his rifle. Consider yourself and the rifle to be like the nucleus, and the bullet
like an electron.
In the past, we have always considered the conservation of momentum to be relevant
only to mass. We have never considered it to be relevant to charge. If gravitational and
electromagnetic fields share a common velocity and a common derivation for their inverse squared
laws of field intensity, then shouldn't the conservation of momentum be equally relevant to both? How can we justify applying the principle of the conservation of momentum to mass, but not to charge?
Mass and charge both share the inverse squared law. There was a single problem that
made it impossible to derive this common inverse squared law from any hypothesis that attributed
qualities to points in space: There are differing ratios of gravitational to electromagnetic field
intensity at the various points in space. This same problem of differing mass/charge ratios still
makes it impossible to simultaneously apply the principle of the conservation of momentum to
both kinds of field. We cannot do so from within any single perspective.
Should we attempt some form of mathematical reconciliation, some paradoxical solution
that will enable us to reconcile the two, or should we seek to find some functionally relevant
phenomenon that should result from this impossibility? I argue the latter. The result: the photon.
What happens when an electron shifts its orbit within an atom? The conservation of
gravitational momentum must apply, but electromagnetic momentum cannot be conserved. This
question leads us to the very foundation of quantum. Why does an orbital shift on the part of an
electron either emit or absorb a photon? What is it that is actually being emitted or absorbed?
I contend that this question can be answered by applying the conservation of momentum
to events that involve mutual linear movements by electrons and nuclei. All movements involving
electrons and nuclei can be defined as being linear in nature because they involve altering the
distance between the nucleus and the electron. This distance is obviously linear. Any such change
in distance will be transformational, altering both the quantity of information flowing between the
two and the intensity with which this information is experienced within the atom.
This transformational event will not be experienced solely within the atom. Any change
in the distance between nucleus and electron will also affect the distances of both nucleus and
electron from all other particles.
Let us apply the principle of qualitative transformation to a situation in which
gravitational momentum is being conserved, as it always is. Before and after the event, we receive
gravitational information from both of the two participating entities, the nucleus and the electron.
When these two entities move linearly relative to each other and momentum is conserved in our
perspective, we experience two equal and opposing transformations.
Because the two transformations are equal and in opposing directions, they cancel out
each other. We experience no net transformations. What this means is that we experience no net
change in gravitational field strength, and hence no gravitational wave as a result of the mutual
linear movement of the two bodies, the nucleus and the electron.
Because gravitational momentum is conserved in our perspective, we can experience no
gravitational waves in our perspective. If we lived in some other perspective in which
gravitational momentum was not conserved, then we would experience gravitational waves. The
gravitational waves that Einstein predicted cannot exist in our perspective, but must exist in all
other perspectives.
Because electromagnetic momentum is not conserved in our perspective, we must
experience electromagnetic waves. This is because, from within the gravitational perspective, the
two opposing electromagnetic transformations cannot be experienced as being of equal magnitude.
One transformation exceeds the other, meaning that we experience a net electromagnetic
transformation as a result of an event within the atom.
The existence of the electromagnetic wave is predicted by combining the conservation of
momentum (or lack thereof) with the principle of qualitative transformation, and then considering
the mass/charge ratio of the electron. The mass/charge ratio of the electron emerges as one of the
most important, functionally significant relationships in all of physical reality.
This prediction concerning the existence of the photon tells us what the photon is: Like
the experience of changing field strength of which it is a manifestation, the photon is merely a
change in the intensity of an experience. The existence of the photon experience is inseparable
from the experience of field intensity.
In the sense of an essence, the photon is neither particle nor wave. It is the same
experience that one encounters when one moves toward or away from the center of a field. Like
the terms matter and energy, the term photon can be used as a mere matter of linguistic convention
and convenience. The term photon does not denote the existence of an actual essence.
One additional factor affects our experience of photon emission and absorption:
Electrons and nuclei have opposing charges. No such opposition exists in gravitation. The
movement of a nucleus toward our position will have the same net effect as the movement of an
electron away from it. Consequently, any change in the linear distance between the electron and
the nucleus will result in a situation in which the movements of the two will have additive rather
than canceling effect, augmenting the wave rather than canceling out its existence.
However, movement of the nucleus relative to our position will produce very little
emission. Nearly all emission will result from the movement of the electron relative to our
position. This is because gravitational momentum is conserved. Because the electron is
extremely light, we experience nearly all of the movement as taking place on the part of the
electron.

M. PHOTON FREQUENCY

We should experience photons as having frequency because electrons orbit their nuclei
relative to our perspective. Because the orbital shift is not instantaneous, the electron will be
proceeding around the nucleus as it changes its orbital distance. Hence, this orbital motion gets
incorporated into the net transformation that we experience. The frequency of the photon is a
function of the electron's orbital rate. In a broader sense, we experience electromagnetic events as
having frequency and being quantized, whereas we experience gravitational events as lacking
frequency and being nonquantized.
To illustrate the reason for this, let us take a hydrogen atom consisting of one electron
and a nucleus consisting of one proton. Applying Newton's orbital mechanics, we note that the
electron does not actually orbit around the nucleus. Rather, both the electron and the nucleus orbit
around a common point, their center of mass. This center of mass is a point located along a line
connecting the electron and nucleus. Because of their relative masses, this center is located very
close to the nucleus itself.
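The standard mass figures make the point concrete. The proton-to-electron mass ratio used in this sketch is the accepted value of roughly 1836, a well-established constant rather than a figure from this book.

    # Location of the electron-proton center of mass in hydrogen,
    # expressed as a fraction of the separation, measured from the proton.
    proton_to_electron_mass_ratio = 1836.15
    fraction_from_proton = 1.0 / (proton_to_electron_mass_ratio + 1.0)
    print(fraction_from_proton)  # about 0.00054: very close to the nucleus indeed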
From our perspective, one at which the center of mass is effectively at rest, we perceive
the electron as orbiting around the nucleus. While the nucleus also moves, we note this movement
as being more like a vibration or a wobble than an orbit. However, these orbits are equal but
opposite from the perspective of mass. The two gravitational transformations that we experience
as a result of the two orbits are equal and opposite, and therefore cancel.
We experience no net qualitative transformation, and hence no gravitational waves as a
result of the orbital relationship. If the mutual distance between the electron and the nucleus were to change, then the center of mass would move as well, restoring balance to the relationship.
We would experience no net gravitational transformation as a result of any change in the
orbital relationship unless the change altered the distance of the entire atom from us. No change in
the orbital relationship would in any way impart the frequency of the orbit to any gravitational
relationship that the atom has with other particles. The frequency of the orbit could never be
experienced.
Such would not be the case with electromagnetic relationships. Because momentum is not conserved with regard to charge, the frequency of the orbit will be experienced as changes in electromagnetic field strength, as photons. This suggests that we should always experience the emission and absorption of photons, even when there is no change in the orbital relationship between the electron and the nucleus. We know that this isn't the case.
The best explanation I can offer is that when orbital relationships involving a group of
atoms remain stable, then we experience all such orbital transformations as canceling each other.
To avoid such cancellation, something must happen that is clearly out of the ordinary with regard
to this usual stability. We experience net changes only when this stability is disrupted, that is,
when atoms randomly experience changes in the orbital relationships of their electrons. The fifth
chapter will further elaborate on what this orbital change involves. For now, however, it is worth
noting that solving this problem will undoubtedly involve extensive use of supercomputers for
mathematical simulation. The relationships and formulas proposed in this book can be used as a
basis for such simulation.
Einstein's theory of the photon correlated photon frequency with its energy level rather than with the electron's rate of orbital rotation. According to his theory, the energy level of the
photon is proportional to its frequency. He then cited the photoelectric effect as validating this
relationship.
When certain metals are placed in a vacuum and then exposed to light of certain
frequencies, electrons are emitted into the vacuum from the surface of the metal. Einstein
reasoned that these electrons were being emitted because photons were impacting them with
sufficient energy as to knock them out of their orbits.
He noted that this photoelectric emission would only take place when the frequencies of
the photons exceeded certain minimums. Below these minimums, there is no emission. He
reasoned that this was because photons of lower frequencies lacked sufficient energy.
According to Einsteins theory, there should be no electron emissions below some
particular frequency level, a level that varies according to the metal chosen. Above this threshold
level, the intensity of emission should increase as the frequency of the light increases. If
diagramed on a graph, the relationship between photon frequency and electron emission should
show a positive correlation.
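For reference, Einstein's relation as conventionally stated is E = hf, with emission only above the threshold frequency set by the metal's work function W. The sketch below uses standard physical constants; the sodium work function of roughly 2.3 electron-volts is an approximate textbook figure, not a value from this book.

    # Einstein's photoelectric relation: kinetic energy = h*f - W,
    # with no emission below the threshold frequency W/h.
    h = 6.626e-34    # Planck's constant, joule-seconds
    eV = 1.602e-19   # joules per electron-volt

    def emitted_kinetic_energy(frequency_hz, work_function_ev):
        photon_energy_ev = h * frequency_hz / eV
        if photon_energy_ev <= work_function_ev:
            return None  # below threshold: no emission at any light intensity
        return photon_energy_ev - work_function_ev

    print(emitted_kinetic_energy(4.0e14, 2.3))  # None: below sodium's threshold
    print(emitted_kinetic_energy(7.0e14, 2.3))  # about 0.60 electron-volts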

If my explanation of the source of photon frequency is correct, then there should be two
kinds of photoelectric effect, the resonant and the accelerative. The resonant should be by far the
rarer, but by far the more intense. Resonant emission takes place when the received photons arrive
in phase with each other, and their frequency corresponds with the frequency of the orbiting
electron. The resonance that results should have effects analogous to those of an army marching
in step across a bridge that is resonating at the same frequency as the marching feet that are
striking it.
Resonance should produce very intense emissions. There should be no emissions below
the resonant frequency. Once photon frequency exceeds resonant frequency, then accelerative
emission should begin to take place. Accelerative emission requires that photon frequency exceed
orbital frequency so that the electrons will be accelerated within their orbit toward escape velocity.
Metallic sodium could be used to test for the existence of resonant emission. Sodium has
an extremely simple, one line spectrum. There should be no photoelectric effect until the
frequency of the light reaches the spectral absorption line of the sodium. At the frequency of this
line, the light should produce very intense, resonant emission. As the frequency of the light rises
above the frequency of this line, photoelectric emission should drop to a much lower level.
Resonant emission will be replaced by accelerative emission. As the frequency increases further,
accelerative emission should increase, although it may disappear completely at twice the
frequency of the spectral line before resuming once again.
If resonant emission exists, then photon frequency must be correlated with the rate of
rotation of the emitting electron, rather than with any alleged energy level of the photon itself.

N. THE PATH A PHOTON TRAVELS

Light is electromagnetic in character, an experience of momentary changes in the
intensity of received electromagnetic information. This means that it must therefore share single
dimensional perspectives with electromagnetic fields. What this means is that neither field nor
radiation can be spatially continuous. Both the experience of field and the propagation of light
must be defined in single dimensional terms.
As is also the case with field, it is impossible to prove that any wavefront by which light
is propagated is spatially continuous. The wavefront can only be intercepted at certain points.
These are the points that are occupied by particles. We can say that a particle occupying a certain
point experiences the wavefront. We cannot say that an empty point located in close proximity to
the particle also experiences the wavefront. In the absence of a particle, there is no experience.
According to the single dimensional premise, information must travel from particle to
particle. No information can travel back and forth, in either direction, between a particle and a
point that is not occupied by a particle. Consequently, empty points are not relevant to the
propagation of light. Only particles are relevant.
This statement resolves the problem created by Ernest Rutherford's discovery in 1911
that the atom is nearly empty. Rutherford fired alpha particles (helium nuclei) at a piece of gold
leaf that was opaque to the human eye. He discovered that nearly all of the alpha particles passed
through the gold leaf unimpeded. A few were deflected. Rutherford therefore concluded that the
atom was nearly empty. Note that particles pass through while light does not.
The empty atom poses a serious challenge for the classical theories of light, both particle and wave. If light is a classical particle, then shouldn't its particles pass through the gold leaf as readily as do alpha particles? Why is it that particles of light strike particles of matter, whereas particles of matter, alpha particles in particular, do not?
In the absence of an aethereal medium through which light could travel, the nearly empty
atom poses a similar problem for the wave theory of light. Maxwells aethereal hypothesis
assigned spatial continuity to the wavefront of light just as it also attributed spatial continuity to
the structure of field. Field assigned qualities to all points in space. A wavefront of light was
spatially continuous as it passed through those same points. How could spatially discontinuous
matter consisting of only a few particles here and there intercept and interact with a spatially
continuous wavefront?
In the cases of both the particle and wave theories of light, we have assumed that the
propagation of light is spatial rather than a strictly linear, particle-to-particle phenomenon. The
wave theory has assigned spatial continuity to the wave. Any particle theory should assume that light's shower of particles is mathematically random. Without the existence of probability waves
or other techniques for mathematically manipulating probability, neither can accurately predict the
movement of light.
The particle-wave Einsteinian photon contradicts the terms of its own existence. It is
assumed that light is propagated in spatially continuous wavefronts. When transferred to the
language of particles, this means that the distribution of particles should be purely random. At the
same time, however, it must be noted that the experience of this wavefront is somehow directed
toward or concentrated by the presence of an electron. The movement of light is not random, nor
is it mathematically diffused as spatial continuity would suggest. These difficulties presented
physicists of the 1920s with one of their greatest challenges.
As with field theory, the theory of light that developed after the collapse of the aethereal
hypothesis retained certain three dimensional characteristics that aether obviously mandated, but
for which there was no continuing mandate. There no longer remained any theoretical reason why
the wavefront of light should be spatially continuous. Furthermore, there was no way one could
experimentally test for such continuity.
If one adopts the single dimensional premise, then one can hypothesize the means by
which a wavefront of light is propagated in a nonrandom manner that violates spatial continuity.
Light can progress along the single dimensional perspectives, the lines of interaction that
individually unite electrons with each other. Let us return to the hypothetical sphere we used to
test for the spatial qualities of field. If light is transmitted in a single dimensional manner, then it
will be experienced at a given intensity by each electron that might be located around a given
sphere placed at any specified distance from the source. However, the wavefront would not be
present at the same intensity at other, empty points around that sphere. In fact, the wavefront
would not be present at all.
The single dimensional character of physical reality at its most basic interactional level
directs photons to receiving electrons, and will not allow them to miss electrons. If the electron
moves relative to our perspective while the photon is in transit, as orbiting electrons do, then the
single dimensional perspective connecting the emitting and receiving electron will also move
relative to our perspective. However, we can never state in absolute terms whether this
perspective has moved or not. In any absolute sense, there is neither movement nor nonmovement
of such perspectives. Photons impact electrons because it is impossible for them to miss.
The theory of special relativity has been used to mathematically resolve the differences
between field theory and our experience of field. Quantum has been used to mathematically
resolve the differences between the particle/wave theory of light and our actual experiences with
light. If one adopts the single dimensional premise regarding basic interactions, then one need not
employ either. We can dispense with both the Lorentz transformation and the probability waves
of quantum. The solution that reconciles the apparently divergent claims of relativity and
quantum ultimately discards both.

O. THE POSITIONS OF LIGHT SOURCES

Let us hypothesize the existence of a distant galaxy that is moving away from us at two-
tenths of the velocity of light. We determine that this galaxy is located two billion light years
away from us. This means that the light we are now receiving was emitted two billion years ago.
We therefore experience this light as coming from the point where this galaxy was located two
billion years ago. Right?
Wrong! We have no absolute perspective from which we can determine what its relative
position was two billion years ago. We have no perspective to which we can anchor a three
dimensional coordinate system. Therefore, we cannot say that the galaxys relative position has
shifted. We receive all light along single dimensional perspectives that unite the light source with
its recipient, us. This one perspective is absolute unto itself. Its absolute movement or
nonmovement cannot be defined relative to any perspective at all, including its own.
Imagine for a moment that we can watch the lateral movements of an atom located two
billion light years away from us. If it moves relative to some arbitrary perspective by which we
can measure such movement, then when should we experience this movement as taking place?
The answer: instantaneously. The single dimensional perspective along which the information
travels shifts instantaneously relative to whatever arbitrary perspective we have adopted as our
benchmark.
Obviously, we cannot watch a single atom from two billion light years away. What this
situation does suggest, however, is that it should be theoretically possible to have nearly
instantaneous communication across the vast reaches of space. It should not require four years for
us to receive a message from a star located four light years away from us.
There are two means by which it should be possible to validate the instantaneous nature
of this relationship. The first involves calculating the shape of the Big Bang. The Big Bang took
place at some theoretically central point and spewed forth galaxies in all directions. Was this
explosion symmetrical?
There are two ways that we can calculate the shape of the Big Bang. The first is to apply
the single dimensional premise as I have argued it in this section. We experience each galaxy as
being located at its present relative azimuth from us. We need not assume that we are receiving its
light that was emitted two billion years ago as coming from the galaxy's relative location at the
time of emission. Therefore, we need not calculate its shift in relative azimuth to compensate for
this passage of time.
The second set of computations includes alleged shifts in relative azimuth. We assume
that each galaxy's present relative position has shifted from the relative position at which we
presently observe it. The question then becomes a matter of comparing results. Which produces a
Big Bang that is more symmetrical? An error in assumption should produce a particular,
predictable pattern of asymmetry. Which set of computations produces an asymmetry that is
consistent with error in its underlying assumption?
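
A minimal numerical sketch (in Python) of the comparison I am proposing follows. The
catalogue of galaxy directions, distances, and lateral drifts here is randomly invented placeholder
data, and the displacement of the centroid is only one crude choice of asymmetry measure; the
sketch shows the structure of the test, not its result.

    import numpy as np

    # Hypothetical catalogue: unit direction vectors and drift data for n galaxies.
    # In a real test these would come from survey measurements.
    rng = np.random.default_rng(0)
    n = 1000
    directions = rng.normal(size=(n, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    distances = rng.uniform(0.5, 3.0, n)        # billions of light years (invented)
    drifts = rng.normal(size=(n, 3)) * 0.1      # lateral drift, fraction of c (invented)

    def asymmetry(positions):
        """Net displacement of the centroid from the origin: a crude symmetry measure."""
        return np.linalg.norm(positions.mean(axis=0))

    # Assumption A (single dimensional premise): use present relative azimuths as observed.
    pos_a = directions * distances[:, None]

    # Assumption B (three dimensional premise): shift each azimuth by the drift
    # accumulated during the light's travel time (distance in light years = years in transit).
    pos_b = pos_a + drifts * distances[:, None]

    print("asymmetry, present azimuths:", asymmetry(pos_a))
    print("asymmetry, shifted azimuths:", asymmetry(pos_b))

Whichever assumption is in error should inflate its asymmetry measure in a predictable way.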

If the propagation of light is three dimensional in its nature, then we should receive light
from some relative position that existed at the time of emission. If the propagation of light is
single dimensional, then we should receive light from the present relative position of the emitter.
The second means by which we might be able to validate the instantaneous experience of
relative position would be to observe an exploding supernova in a nearby galaxy. When we first
observe a supernova in a galaxy located 80,000 light years from us, we should observe the light
that came from the first days of the explosion. However, we should observe the size and shape of
the residue that exists 80,000 years after the explosion. If the three dimensional premise is correct,
then the star should still appear as a sharply defined point. If the single dimensional premise is
correct, then the residue should appear as a very brightly lit cloud.

P. INVERSE SQUARED LAW OF THE INTENSITY OF
RECEIVED LIGHT

In addition to the two inverse squared laws of field strength, the gravitational and
electromagnetic, there is a third inverse squared law that the single dimensional premise explains.
This is the inverse squared law of the intensity of received radiation.
The inverse squared law of received radiation has previously been derived from the very
logical implications of three dimensional geometry. To make this derivation from three
dimensional geometry, one must assume that the wavefront of light is spatially continuous. In the
language of particles, particles must be emitted randomly rather than being selectively directed
toward electrons.
Let us start with a single point of light emission, and an ever-expanding sphere of points
that are equidistant from the source of light emission. As this sphere expands away from the
source with the emitted light, its points retain their common distance from the source. There
should be some predictable mathematical relationship between distance from the light source and
the surface area of the sphere. This mathematical relationship is one of squaring: the surface area
of the sphere at any distance from the light source is proportional to the square of the distance.
As one moves a receiving body toward or away from the light source, the surface area of
the sphere as perceived from the emitter changes by the square of the distance. Light intensity
increases or decreases by the inverse square of the distance. Multiplying the square by the inverse
square produces one, so the total value of the wavefront remains constant at all distances. It
is therefore logical to assume a cause/effect relationship between the two, the distance from the
light source and the intensity of received light. I contend that the relationship is merely
coincidental, and has nothing whatsoever to do with cause and effect.
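
In conventional terms, the whole derivation can be written in one line. For a source of total
power P radiating through a sphere of radius r,

    A = 4\pi r^2, \qquad I = P/A = P/(4\pi r^2), \qquad I \times A = P

The product of intensity and surface area is the constant P at every distance. This is the
coincidence of values described above.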
Light and electromagnetic fields share the same information, velocity, and lines of
interparticulate interaction. Consequently, the inverse squared law of electromagnetic field
strength also affects the intensity of the photon as it is experienced by the receiving electron. Field
and radiation share a common derivation for their inverse squared law because light is merely a
momentary experience of changing field intensity. The two laws are, in fact, the same law.
If one moves a body toward or away from the emitter of light, there is no change in the
number of photons that are transmitted toward the recipient. This is because there is no change in
the number of lines of interaction connecting the particles of the emitter with those of the
recipient. However, the intensity of the field experience at each end of each line is altered in
accordance with the inverse squared law of the strength of field experience. Each electron will
continue to receive each net transformational experience. What is altered by changing mutual
distance is the intensity of each such experience.
Because the strength of the individual photon itself varies in its energy value according to
the inverse squared law of the distance between emitting and receiving electrons, we should not
consider the photon to be a particle. No particle has such a quality.
Because we can explain the propensity of the photon to impact electrons as being a
product of the single dimensional nature of all fundamental interactions, we need no longer couple
probability waves with the particle analogy to explain why photons have a very high propensity to
impact electrons. We can completely eliminate any analogy between photon and physical particle.
Light is clearly more wavelike than particle-like. However, this wavelike nature exists in
only a single dimension. Light is not wavelike in any three dimensional sense.

Q. OPTICS AT EXTREMELY LOW LIGHT INTENSITY

A single, often repeated experiment has frequently been cited by Richard Feynman and
others to illustrate the conceptual problem that is essential to quantum theory. This experiment
was first conducted during the 1920s to prove the particle-like nature of the photon.
The wavelike nature of light had previously been demonstrated using the two slit
experiment. This experiment involved the use of three parallel planes. Light first passed through a
narrow slit in the first plane. Two slits were cut through the second plane, parallel to each other
and to the first slit. Light that had already passed through the first slit was allowed to pass through
these next two slits. The third plane was a screen onto which the light was then projected.
The light projected on the screen divided itself into alternating light and dark bands.
These bands were products of the constructive and destructive interference of the waves of light.
These patterns of interference demonstrated that light had wavelike qualities.
The experiment that Feynman and others have cited was intended to prove that light also
has particle-like qualities. Like any other particle, a particle of light emitted as an electron
changes orbit must have some certain quantity of energy. This quantity of energy would be a
quality of the photon itself.
What would happen if one used photographic film as the screen on which to project the
two slit experiment, and limited the light source so that no more than one photon's worth of light
could enter the experimental apparatus at any one time? If light was particle-like and no more
than one particle could enter at a time, then the photographic film should show no patterns of
interference, the alternating light and dark bands. This was because the extremely low light level
should force the single photon to choose between the two slits, going through only one. The
experiment required several weeks to sufficiently expose the film so that it could be developed.
The result: The film showed interference patterns, just as if a more intense light source
had been used. It appeared that each photon must have been passing through both slits
simultaneously. These paradoxical results were later explained in quantum theory by the concept
of probability waves, a structure of mathematical thought that allowed reality to deviate from the
predictions that classical mechanics would have made.
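
The banded pattern itself is easy to reproduce numerically. The following sketch (in Python)
accumulates single-photon detections one at a time, sampling each from the standard two-slit
intensity formula; the wavelength, slit spacing, and counts are arbitrary illustrative values.

    import numpy as np

    wavelength = 500e-9      # metres (illustrative)
    slit_gap = 50e-6         # metres (illustrative)
    screen_dist = 1.0        # metres (illustrative)
    x = np.linspace(-0.02, 0.02, 400)     # positions on the screen

    # Standard two-slit intensity: I(x) is proportional to
    # cos^2(pi * d * x / (lambda * L)) for small angles.
    intensity = np.cos(np.pi * slit_gap * x / (wavelength * screen_dist)) ** 2
    prob = intensity / intensity.sum()

    # Accumulate detections one photon at a time, as in the low-intensity experiment.
    rng = np.random.default_rng(1)
    hits = rng.choice(x, size=20000, p=prob)

    counts, _ = np.histogram(hits, bins=40)
    print(counts)   # alternating high and low counts: the bands appear photon by photon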

The question involved in the two slit experiment has practical consequences for everyday
optics. If the two slit experiment had produced the expected results, then it should have told us
something about the optical nature of refraction and reflection.
To function, optical refraction and reflection must maintain coherent patterns of
constructive and destructive interference. If these patterns began to break down at very low levels
of light intensity, then the optical phenomena themselves would begin to break down. There would
be limits of intensity below which refraction and reflection could not be used to make reliable
observations. Fortunately, such is not the case.
The reason why this is not the case has already been explained in this chapter. A photon
is an experience of change. It is not a particle to which a particular energy can be assigned.
Consequently, light intensity is irrelevant to the functioning of optical events.
If an electron experiences any change in its relationship with its nucleus, then all other
electrons that can be optically affected by this change will experience the change. The experience
of one photon's worth of change within an atom will be divided and subdivided so that every
electron that might experience the change does experience it.
To borrow from scripture, what happens is like the feeding of the four thousand with
seven loaves and a few small fish.

R. THE CONSTANT VELOCITY OF LIGHT EN VACUO

In addition to making claims regarding the nature of interactions involving fields,
Einsteinian relativity made one claim regarding the nature of the velocity of light. This was the
theory of the constant velocity of light en vacuo.
According to this theory, light moving through a vacuum always assumes its velocity
relative to the observer. Relative to the observers perspective, light has a constant velocity
regardless of any mutual velocity that may exist between the light source and the perspective of
the observer.
The law of the constant velocity of light en vacuo posed a serious problem for optical
theory. Special relativity offered nothing that would help to explain the movement of light
through a refractive medium. It did, however, require that some differentiation be made between
optically involved refractive media and the optically noninvolved vacuum. How were we to make
this distinction?
Special relativity suggested that such a distinction must be made, but supplied us with no
evidence regarding the actual nature of refractive processes. Hence, there was no way to predict
which media or situations would be optically involved. With the presence of optical processes
themselves being unpredictable, how was one to differentiate between the spaces that were inert
vacuums and others that were optically involved? Where, exactly, would the law of the constant
velocity of light en vacuo function? Where would it not?
The answer lay in specious and arbitrary distinction. The following quotation from a
book Einstein coauthored with Leopold Infeld shows just how arbitrarily the distinction was made:

Again we start with a few experimental facts. The number just quoted [186,000 miles
per second] concerned the velocity of light en vacuo. Undisturbed, light travels with this
speed through empty space. We can see through an empty glass vessel if we extract the
air from it. We can see planets, stars, nebulae, although the light travels from them to our
eyes through empty space. The simple fact that we can see through the vessel whether or
not there is air inside shows us that the presence of air matters very little. For this reason
we can perform optical experiments in an ordinary room with the same effect as if there
were no air. [emphasis added][42]


Interestingly enough, this is exactly what the aethereal hypothesis would have predicted.
Prior to Einstein, scientists assumed this to be the case. Although he discarded aether, Einstein
apparently retained this one assumption that had been rooted in the aethereal hypothesis.
However, as any astronomer or celestial navigator is well aware, the atmosphere is
refractive. According to the theory of refraction, this means that light must be assuming its
velocity relative to the atmosphere rather than relative to the emitting star or galaxy. This being
the case, all measurements of the velocity of light taken through the atmosphere must show
constancy of velocity. The results of the famed Michelson-Morley interferometer experiment become easy
to explain.
How far must one depart the surface of the earth before one can assume the presence of
an optically inert vacuum required for the operation of the law of the constant velocity of light en
vacuo? This question is unanswered, and probably unanswerable. There is no clear boundary
between the two kinds of environment.
Any search for such a boundary was further complicated by Ernest Rutherford's
discovery that solid matter is nearly empty. If solid matter is nearly empty, then how could we
differentiate between optically involved media and inert vacuums, even if we were able to draw a
definite dividing line between the two? Shouldn't the law apply just as readily to the movement of
light within the refractive media? After all, light must move through the empty interstitial spaces
within solid matter on its way through the refractive media. The empty interstitial spaces one
finds within a block of lead are, in fact, pure vacuums.
Having discarded Einstein's special relativity as it pertains to matter, that is, to field-field
interactions, we can now return to a much simpler world in which the values of the units of time
and distance never vary. In this simpler world, the velocity of light as experienced by the recipient
should comply with common sense. It should be a function of both the velocity of emission and
the mutual velocity of the emitting and receiving bodies. There is no constancy of experienced
velocity.
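
Stated as a formula, the common-sense rule proposed here is ordinary Galilean composition.
Writing v for the mutual velocity of recession between emitter and receiver (negative for
approach), the two rival claims can be set side by side:

    v_received = c - v    (classical composition)
    v_received = c        (Einstein's postulate)

This is a restatement of the claim just made, not a derivation of it.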
However, it initially appears that this statement is contradicted by experimental evidence.
How do we explain the appearance of constancy in experimental situations in which constancy of
results is, in fact, the rule? I propose a simple answer: Experience is passed from electron to
electron by a wavelike process of transformation/reception/transformation/emission, sequentially
involving a large number of electrons. In this wavelike process, experience assumes its velocity
relative to the last electron that has interacted with it. Each electron is both acted upon by the
altered informational intensity that it receives and simultaneously acts upon the information that it
transmits to other electrons.
Consequently, one cannot interact with light without at the same time re-establishing the
velocity of light as being relative to one's experimental apparatus. Because the velocity of light
re-establishes itself in this manner, experiments involving the velocity of light will show
constancy of velocity.
There should be one experimental exception to this rule. Because of the purity of the
vacuum that this experiment would require, it may not be possible to conduct it: In the case of
reflection, the angle of incidence in a pure vacuum would not equal the angle of reflection if there
is a mutual velocity between the light source and the mirror. This is because the velocity of
incidence would differ from the velocity of reflection.
If, however, Einstein's theory of the constant velocity of light en vacuo is correct, then
the two velocities must be equal if there is no mutual velocity between oneself, the observer, and
the mirror. This means that the angle of incidence must equal the angle of reflection.

S. THE THEORY OF REFRACTION

Thus far, this chapter has discussed two rules that govern the movement of light. First,
the velocity of light is, for all practical purposes, a constant without regard for whatever may
occupy a given space. This means that one cannot distinguish between glass refractors and
vacuums. The velocities through both spaces are the same.
Second, light is an experience of change that follows the lines of interaction that unite
electrons with each other. Light does not possess a linearly propagated, spatially continuous
wavefront.
This means that refraction cannot actually slow the movement of light through the
medium. It can only force the experience to follow paths that require greater distance. Think of
light as following a latticelike network of lines of interaction, continually changing direction as it
moves. Light working its way through a refractive medium is more like a mouse working its way
through a maze than it is an arrow aimed straight toward the target.
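
One way to phrase this maze picture quantitatively, as a restatement of the claim rather than
a derivation: if light always moves at velocity c but must travel a zigzag path of total length
L_path between two points separated by a straight-line distance L, then the observed index of
refraction n is simply the ratio of the two lengths:

    n = L_path / L, \qquad t = L_path / c = nL / c

The transit time t = nL/c is exactly what conventional optics attributes to a slowing of the
light to velocity c/n.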

T. DERIVING ELECTRICITY

This chapter will not attempt to explain electricity. However, I note that my derivation of
the existence of the photon also predicts electricity, and should provide a foundation for such an
explanation. Electricity must exist for the same reason that the photon must exist. There can be
no gravitational equivalent of electricity for exactly the same reason that there can be no
gravitational waves. We live in the gravitational perspective, the one perspective within which
gravitational momentum is conserved.
Electricity is, in fact, exactly the same phenomenon as light. What this means is that both
should move in much the same way, and that the movement of both should be governed by the
same rules. In a conceptual sense at least, this brings optical and electrical technologies much
closer together. As the two merge into a single theory, it may become easier to make practical,
technological use of the electromagnetic radiation that we find in our environment.
Electrical currents differ from light only in their frequencies. The frequencies of
electrical current are far lower. Both experiences can be transmitted as a wave motion from
electron to electron within their materials, following along the single dimensional lines of
interaction that unite electron with electron.
While an electrical current may involve some movement of electrons from atom to atom,
it should be possible to conduct this current from atom to atom without requiring any electron to
depart its atom. What this means is that electrical currents should be like light passing through a
refractive glass lens. In the case of light, there is no minimum light intensity required for the
patterns of constructive and destructive interference to take place, thereby giving the movement
of light its direction. In the case of an electrical current, there should be no minimum energy level
required to induce a current. Currents can exist even though there is insufficient energy in the
current to induce even one electron to depart its atom.
The velocity of the electric current should be the same as the velocity of light. This we
know to be the case when materials become superconducting, all electrical resistance having
ceased. Resistance slows the passage of an electric current. Why?
Random movements among the atoms of a conductor will produce localized, repulsive
transformations back and forth between electromagnetic radiation and kinetic energy. I contend
that light and electricity are exactly the same thing. Consequently, these random emissions and
absorptions will mix with the electrical current as it passes. Such a mixture will do two things.
First, the mixture will cause the passing wave of electrical current to become less
organized. A portion of its energy value will be randomly dispersed, heating the conductor and its
environment. Second, this mixture will slow the passage of the wave. This is because the
directions and velocities produced by random generation and absorption of electromagnetic energy
will mix with the moving current.

U. PHYSICS AND ENGINEERING

The concerns of physics are primarily epistemological. How is it that we can know what
we know, or learn whatever it is that we might learn? The concerns of engineering are primarily
practical. How can we manipulate physical reality to accomplish some practical end?
This division of concerns is hardly modern. It predates the modern sciences by at least
two millennia, dating back to the ancient Greeks. The physics of today is far more disciplined than
ancient or medieval physics, and is becoming more disciplined as the centuries pass. However,
this does not mean that physics has given up its older tendency toward epistemological
speculation.
Today's speculations are more disciplined than those of the past, but speculations remain
exactly what they are: speculations. Any discipline that can provide shelter and food for
Einsteinian relativity and for Schrodinger's cat is guilty of unnecessary speculation.
The questions that physics most needs to ask itself concern the necessity of particular
speculations. We need to codify our needs, the issues that we believe require speculation. As we
codify, we need to spell out in detail everything that we are assuming, every statement we are
making concerning the nature of physical reality. This chapter has attempted a degree of
codification.
No element of any world view should remain assumed, unstated, and unquestioned. The
more obvious it appears to be, the more directly and emphatically we need to state it, lest we
overlook its possible error.
The real function of scientific discipline isn't to assume or to speculate, but rather to
challenge assumptions and circumnavigate speculation. In science, no question is more dangerous
or misleading than the one that needs no answer.


CHAPTER FOUR:
VIOLATING ENTROPY

We owe it to ourselves to rebel against entropy. It is time that we made every effort to
set aside thermodynamics and the technological limitations that it has dictated. There is no reason
why this archaic pattern of thought should continue to limit our minds.
The Age of Energy has been characterized by a continued but unacknowledged reliance
upon medieval thought. Whether by choice or default, we have continued to rely upon the
traditional creation myth taken in part from scripture, and have employed it as a basis for our
epistemology.
We still like to believe that man can achieve a transcendent understanding of physical
reality, an understanding that will be binding upon all peoples and all cultures. Because of our
egoism, we have taken medieval thought and have projected it far beyond its reasonable limits.
We have continued to place faith in natural law, in our ability to discover it, and in its
binding power over human choice. We have taken one of the earliest of our industrial
technologies, water power, and have conceptually projected its reach both into our own future and
out onto the universe.
As one might reasonably expect, our unacknowledged reliance upon medieval thought
has brought us face to face with a re-emerging medieval problem, an energy crisis. Fossil fuels
did not really solve the medieval shortage, at least not permanently.
In fossil fuels, man had found an aspirin, not a solution. Unfortunately, it was an aspirin
that carried with it the possibility of a severe hangover somewhat later: As nonrenewable
resources became scarce, technologies of the type that had dominated the Middle Ages might once
again become dominant although at a scale that would force severe economic constriction.
Physical theorists once sought to explain the new technology of the steam engine by
comparing it with some medieval technology with which they were already familiar. The
technology they chose was the obvious one, water power. Water power became the great,
dominating archetype. As the literality of the archetype was rejected, the archetype continued to
function by analogy. The kinetic theory of heat was adopted, with water power functioning as its
archetypal guide.
To the usual characteristics of water power, the kinetic theory of heat added the new and
deeply pessimistic doctrine of entropy. Entropy would send us forever searching for new sources
of fuel.
The entropy of the universe would continue to grow as differentiated organization
degraded into undifferentiated, useless chaos. This degradation of useful energy into useless chaotic
energy would take place whether man participated or not. By choosing to participate, all that man
could do would be to accelerate the rate of this degradation.
What a grim fate! This is the world view against which we need to rebel. Although our
technologies have advanced far beyond their medieval predecessors, our patterns of thought have
not advanced to the same degree as have our technologies. We can measure and calculate, but we
cannot manage. We have lacked the understandings of fundamental principles that would enable
us to manage.
We must seek to fill this void by developing new paradigms with which to challenge
older, long-accepted paradigms. This is what I have attempted to do. I suggest that the paradigms
of matter/energy, the laws of thermodynamics, the spatial concept of field, Einsteinian relativity,
the particle/wave theory of light, and the probability waves of quantum can all be replaced by a
mere two statements.
These two statements, which I have proposed as the replacing paradigms, are the single
dimensional premise that defines the geometric nature of reality, and the principle of qualitative
transformation, from which matter/energy, light, and thermodynamic transformation can all be
derived. In this chapter, I propose to use these two statements and the concepts that I have derived
from them to challenge the dominion of entropy.
Science is a very human endeavor in which we as mere humans bare our souls before the
universe to learn how the universe will respond. In acknowledging this to be the nature of science,
we must admit that the universe has its own ways of teaching us more about ourselves than we
will ever learn about the universe. The difference between discovery and revelation is not as great
as we often like to believe.
No weapon, neither sword nor pen, can assist us in overturning past limits. This is a task
that we undertake without any weapons whatsoever, for it is we ourselves who are the adversary.
The only stage available to us is that of our humanly created world of symbolism. The sole actor
must be our own imagination.
No rebellion against past limits can succeed unless it is carefully disciplined. If we are to
think and rethink creatively and productively, then we must first adopt and employ a clearly
defined sense of discipline. This sense of discipline must be based first and foremost on self-
understanding. Our self-understanding will then define how it is that we can hope to understand
the world around us.
This sense of self-understanding must be like the finest disciplines that one can find in
the history of religious thought. The purpose of such a discipline is not to emaciate or humiliate
the disciple, but rather to heighten his self-awareness and to deepen his self-understanding. The
purpose is not to separate the disciple from his world, but to make him even more a part of his
world than before. Those who best understand themselves are best prepared to comprehend the
world around them.
Evolution is not a doctrine that deprives us of our spirituality, or that in any way denies
us our right or ability to seek spiritual discipline. Instead, evolution is a doctrine that imposes its
own particular discipline: We as evolved beings must remain forever aware of our evolved past
and of the adaptations that have enabled us to evolve, both physically and culturally. We have
adapted in the past, we can adapt now, and we will adapt in the future.
Nothing about our fate dooms us to a head-on collision with limitations imposed upon us
by our universe.

A. STRATEGIES TO OVERCOME ENTROPY

As energy theory readily acknowledges, there is no shortage of energy in our
environment. With regard to any conceivable scale of human need, we are swimming in a
limitless ocean of energy. There are vast quantities of heat surrounding us. Heat continually
radiates back and forth, from object to object.
Usually, our environment tends toward some ambient temperature, be it warm or cold.
Temperatures tend to equalize. Undifferentiated energies are those that exist where no
temperature differential is present. Consequently, most of the heat energies present in our
environment are undifferentiated.
The question that confronts us is not one of supply, but rather one of management. How
can we manage this limitless ocean to obtain all that we need?
According to the dictates of thermodynamics, we can tap heat only where there is a
temperature differential. In the absence of any temperature differential, heat is present in our
environment but is beyond our reach. We cannot productively tap it for any use. To whatever
degree we can tap it, we must expend more energy to tap it than we can usably obtain.
When discussing the rules of a particular discipline, one should carefully note those
situations, however unachievable they may seem, that would violate the alleged rules. In the case
of thermodynamics, we should catalogue every situation in which we might possibly be able to
violate entropy.
The goal of any strategy to violate entropy must be to tap the undifferentiated energies
present in our environment. In seeking to do so, we must attempt to obtain more usable energy
from our technologies than we expend, thereby exceeding 100 percent efficiency. To obtain
efficiencies greater than 100 percent, we must tap the undifferentiated energies present in our
environment.
Let us begin by first classifying the various strategies by which it may be possible to
overcome entropy. In the broadest sense, there are two categories of strategy. The first is the
nontransformational. The nontransformational includes and is limited to the optical and the
electro-optical. These attempt to passively achieve useful differentiations from the
undifferentiated temperatures of the electromagnetic radiation present in our environment.
For example, there is electromagnetic radiation present in the book that you now hold in
your hands. Heat radiates around the space in which you are reading, regardless of how cold this
space may feel to you. How can we reach into this book and the spaces around you to tap this
energy? How can we tap this energy without expending any fuel, electricity, or mechanical power
to do so?
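
The magnitude of this ambient radiation is easy to estimate from the standard Stefan-Boltzmann
law. The following minimal sketch (in Python) computes the flux radiated by an ideal blackbody
surface at an ordinary room temperature; real surfaces emit somewhat less, but the order of
magnitude stands.

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    T = 293.0          # room temperature, kelvin (about 20 C)

    flux = SIGMA * T ** 4
    print(f"{flux:.0f} W per square metre")   # roughly 418 W/m^2

Several hundred watts continually radiate from every square metre of an ordinary wall, whether
or not we make any use of them.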
Managing heat as radiation, it might be possible to achieve passive differentiation. We
might be able to create an optical technology in which refractive materials intensify radiant energy
passing in a particular direction, thus favoring passage of heat in one direction over passage in the
opposing direction.
Such a technology would heat in cold weather by admitting radiant heat from the outside
while reflecting interior heat back into the room. In hot weather, the direction of favored passage
could be reversed to supply air conditioning.
It might also be possible to convert environmental radiation into electrical potential that
can then be transmitted along wires. Instead of having solar cells that work only in the presence of
light, we might have heat cells that supply electrical potential day and night by drawing upon
environmental radiance.
The other category of strategy is the transformational. To violate entropy, any strategy
that involves transformations must draw undifferentiated radiant energy from the environment into
the transformational process. Heat that is drawn from the absolute temperature of the environment
can then be transformed into mechanical power. Tapping the absolute temperature of the
environment in this manner is already supplying the useful output of mechanical refrigeration,
some of the power required by jet aircraft, and could conceivably supply most of the power
required by automobiles.

I argue that we have already been drawing undifferentiated radiant energy into
mechanical transformations, although we have not recognized this fact. I illustrate the principles
by reference to existing technologies, then point out how such applications might be extended to
new uses, the automobile in particular.
Finally, I challenge the absolute nature of the conservation of matter and energy. Using
the orbital mechanics of the solar system, I argue that energy might actually be created from
nothing by the processes of orbital mechanics. While I doubt that our technologies will ever be
able to accomplish this on a usable scale, we should at least be aware that such processes exist
(if, in fact, they do).

B. OPTICAL DIFFERENTIATION

The materials technology necessary to achieve optical differentiation does not presently
exist. It may or may not be possible to develop the necessary materials technology. In the
absence of such a technology, it may nonetheless be worthwhile to engage in mathematical
modeling. Based upon mathematical modeling, what materials technology would most effectively
and efficiently achieve optical differentiation? We should search for the most efficient models,
bearing in mind that we may need several competing models of varying designs and efficiencies.
Mathematical modeling has previously been employed to guide development of materials
technologies. Mathematical models tell us what qualities we should be seeking. They tell us how
we should attempt to relate the qualities of certain materials to those of other materials, whether
very similar or very different. Models offer us no assurance that we are actually engaging in a
feasible endeavor. Such are the risks and the potential rewards of the endeavor.
Mathematical modeling will be based upon the following premise: In nature, the
averaging of temperatures toward uniformity is a statistical process. There is no pressure of any
kind that impedes the flow of radiant energy from colder to warmer objects. The movement of
radiant heat is entirely random. It does not respond to attractive or repulsive fields, or to any kind
of pressure.
Radiant heat does not merely flow from warmer objects to colder objects as water flows
from higher to lower elevations. Heat also flows from colder objects to warmer objects. The
reason that temperatures average is because there is more heat available to flow from warmer
objects to colder objects than there is to flow in the reverse direction. The movement of heat being
entirely random, statistical probability favors temperature averaging.
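
This two-way picture agrees with the standard bookkeeping of radiative heat transfer. For two
ideal (blackbody) surfaces at temperatures T_1 > T_2, each radiates according to the
Stefan-Boltzmann law, and the net flow per unit area is merely the difference between two
opposing flows:

    q_net = \sigma T_1^4 - \sigma T_2^4 = \sigma (T_1^4 - T_2^4)

Both terms are always present. Only their difference registers as heat flowing from the warmer
object to the colder.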
Passive optical differentiation reverses the normal statistical averaging process that
governs temperature equalization, thereby creating differentiation where none would otherwise
exist.
We presently have an optical technology that accomplishes a similar function, although it
works only when visible light is present. We have films and coated glazings that are transparent
to visible light, but that reflect the lower frequencies (longer wavelengths) of ambient heat.
When used in a window, this technology admits light. The light then strikes some object
that absorbs it, and that then re-emits it at the lower frequency of ambient heat. When this re-
emitted radiation strikes the window at the lower frequency, it is reflected rather than transmitted,
thus retaining the heat in the room.

Passive optical differentiation would do this, but would admit the ambient external heat
as well as the light. Present materials will not do this. They transmit light in both directions,
while at the same time reflecting ambient heat in both directions.
There are two reasons why present materials function in the way that they do. First, there
is a rule in optics that the paths that light of a particular wavelength travels must be reversible. If light
of a particular wavelength can pass in one direction, it can also pass in the other. As best I can
determine, this appears to be an ironclad, inviolable rule.
The second rule is that all surfaces of a single transparent, refractive material are equally
reflective. At present, all glazings (and, in fact, all transparent solids) have uniform indices of
refraction. These indices vary according to the material one selects, but are uniform within that
one material. Light striking the indoor surface of the glazing encounters the same index of
refraction as does light striking the outdoor surface, and is therefore reflected to the same degree.
Consequently, there is reason to believe that we cannot use a flat sheet of any glazing
material as a passive differentiator. Instead, we must attempt nonfocal intensification of received
radiation.
A convex lens can be used to focally intensify radiant heat just as it can be used to
intensify sunlight, but only if all of that heat originates at some specified distance. Focus is not
possible unless all radiant heat comes from approximately the same distance. Radiant heat taken
from the environment in general cannot be focused. It must be intensified by some nonfocal
means.
This means of intensification is merely a modification of the principle of a refracting
telescope. Refracting telescopes have cylindrical sides that are coated black on the insides so as to
absorb unwanted radiation. The nonfocal passive differentiator differs from the telescope in two
key respects. First, the sides are reflective rather than absorptive. The goal is to pass as much
radiation as possible, regardless of its source or the distance of its source. Second, the tube is
conical rather than cylindrical. The lenses the differentiator might employ may superficially
resemble those employed in a telescope, but are likely to be quite different because their purpose is quite
different. In fact, sequencing lenses is an unlikely strategy.
The goal is to pass radiant heat through a conical tube in which the diameter is
progressively shrinking. To do this, radiant energy is reflected back and forth from side to side.
However, there is a problem with a purely reflective strategy: Each time the light is reflected
within the tube, its angle becomes further removed from the path that would send it through the
small end of the cone. After a few reflections, it is reflected back out the large end of the cone.
This is why any strategy using sequenced lenses must use a convex lens or lenses. Lenses
placed at the large end of the tube and at subsequent points will bend the rays of radiant heat
toward the small end of the cone, preventing the angle of sidewall reflection from becoming such
that the radiant heat is reflected back out the large end.
The functioning of the differentiator depends upon having a difference in surface areas
between the collecting and emitting surfaces. The emitter surface is a net emitter because it has a
much smaller collecting surface area than does the collector. To squeeze the collected radiation
through the smaller emitter surface area, the differentiator must intensify the radiation.
In fact, both collector and emitter surfaces collect in the same manner. Per unit of
collecting surface area, they collect to the same degree. Because there is a greater intensity of
radiant heat on the warmer side, the surface on that side will collect more per unit of surface area
than will the designated collector. This is why emitter surface area must be minimized.

Efficiency would be a function of two factors. First, the differentiator must maximize the
differential between collector and emitter surface areas. Second, the design must minimize
internal reflectance that would reflect collected radiation back toward the collecting surface.
Two strategies could be used to maximize the differential between collector and emitter
surface areas. First, the collector might place two or more collecting lenses optically in sequence
much as a telescope places multiple focal lenses in sequence.
Sequencing collectors would minimize emitter surface area, but would also increase
internal reflectance back toward the collector. Adding optical surfaces to the interior of a device
always increases internal reflectance. One could reach a point where losses due to internal
reflectance would exceed any gains that could be achieved by further reducing emitter surface
area. Consequently, the final design will be a compromise that will involve tradeoffs between
internal reflectance and the differential between surface areas.
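
A toy model of this compromise can be sketched as follows (in Python). The per-surface loss
fraction and the rate at which added lenses shrink the emitter area are invented placeholders,
not measured properties of any material; the sketch illustrates only how an optimum lens count
might be located.

    # Toy tradeoff model: more sequenced lenses shrink the emitter area (good), but
    # each added optical surface reflects some radiation back (bad). All numbers
    # below are invented placeholders for illustration only.

    LOSS_PER_SURFACE = 0.04      # fraction reflected back per optical surface (assumed)
    AREA_SHRINK_PER_LENS = 0.5   # emitter area multiplier per added lens (assumed)

    def relative_output(num_lenses: int) -> float:
        emitter_area = AREA_SHRINK_PER_LENS ** num_lenses
        throughput = (1.0 - LOSS_PER_SURFACE) ** (2 * num_lenses)  # two surfaces per lens
        # Figure of merit: flux passed on, penalized by back-emission
        # through the remaining emitter area.
        return throughput * (1.0 - emitter_area)

    for k in range(1, 8):
        print(k, round(relative_output(k), 3))
    # The output rises and then falls: past some lens count, surface losses dominate.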
In the second strategy, a single, solid, one-piece collecting cone might employ materials
having differing indices of refraction. This strategy would use a material that has a lower index of
refraction on and near the collecting surface. Using a material with a higher index of refraction
toward the emitting surface would help to bend the path of the radiant heat toward the emitter
surface, thus allowing a reduction in emitter surface area.
While it would be possible to glue materials having two indices together to make the
collector, it would be better to avoid having a clearly defined boundary if possible. This is
because any such boundary will be reflective to some degree, thus impairing the effectiveness of
the collector.
Once we develop a better understanding of how light travels through a refractive
medium, we may develop the ability to synthesize refractive media that have the desired refractive
qualities. With the ability to synthesize, we may be able to develop materials that have gradients
of refractivity. One surface would have a low index of refraction. As radiant energy passes
through the material, it would encounter an ever rising index of refraction.
If a gradually rising gradient of refraction could be introduced into the material, then the
path the radiant energy travels would curve as it passes through the material. Where the index of
refraction is lower, the path the radiant energy travels would be more parallel with the surface of
the collector, although still bent so as to pass linearly through the emitter. As the radiant beams
encounter a higher index of refraction, they would be bent toward the emitter surface.
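
For a medium whose index varies in layers, this bending is governed by the gradient form of
Snell's law, a standard result in optics: along any ray, the product of the local index and the
sine of the angle the ray makes with the gradient direction is conserved,

    n(x) \sin \theta(x) = constant

As n rises toward the emitter, sin theta must fall, and the ray bends toward the direction of
increasing index. This is the same principle that curves light rays in a mirage.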
Whichever collection strategy one employs, the collectors would be embedded in an
insulating foam board. The emitting surface of the board would be coated with a heat and light
reflective coating so as to reflect incident radiation back toward the side to be heated. The only
holes in this reflective coating would be where the emitting surfaces of the collectors emit radiant
heat.
When placed in an environment of uniform temperature, the differentiating panel with its
embedded collectors would differentiate that temperature into warmer and colder zones. If placed
in the exterior wall of a building, it could be used to heat the building in the winter by drawing in
exterior warmth. In summer, it could be used to air condition by directing interior warmth out.
In effect, such panels incorporated into the wall of a building would function as a heat
pump and air conditioner, except that the panels would not require any operating power. The heat
and air conditioning would be entirely free. Using a set of such panels in a well-insulated
building, one could regulate warmth by reversing the directions that the collecting and emitting
surfaces face. The greater the number of panels that face in the same direction, the greater the
output of the system.

Such panels would function in exactly the opposite manner as does insulation. Insulation
attempts to retard the flow of heat from one wall surface to the opposing surface. Differentiating
panels would be intended to promote as much flow as possible, but to control the direction of that
flow.
In an economic sense, the most practical initial use would probably be in refrigerating
food. Manufacturers could incorporate passive differentiators into the refrigerator case, reducing
and perhaps even eliminating the need for mechanical refrigeration. Internal temperature could be
controlled by placing sliding mirrors beside the emitters, if such control is needed. To prevent or
reverse excessive cooling, these mirrors could be slid over the emitters, reflecting heat back into
the unit. A second use would be to intensify the radiant heat being experienced by heat-converting
photovoltaic cells.

C. PHOTOVOLTAIC CELLS

The most serious problem associated with most of our present alternative energy sources
is that they are intermittent. They depend upon either the light of the sun or the movement of the
wind. Because their sources are intermittent, they must collect more than is being immediately
consumed during those times when they can collect. They must possess some storage mechanism
to store the surplus for times when the systems will be unable to collect.
This is why the absolute temperature of the environment must be considered to be the
optimum source. It is always and everywhere with us. It does not need to be either transported or
stored. We borrow it where and when we need it, obtain what we desire from it, and return it to
our environment. Use has no effect on it. It can be reused indefinitely.
Photovoltaic cells that would tap environmental radiance rather than sunlight would
function 24 hours a day all year. There would be no need for any storage or backup system.
According to my theoretical projections, light and electricity should share a common
theory because the two are based upon a common principle. The function of any photovoltaic cell
in electro-optics should therefore be like that of a diode in electronics, or more specifically like a
diode where the two opposing surfaces possess some degree of capacitance. The task of the diode
is to convert alternating current to direct current. Isnt this what a photovoltaic cell does to light?
Our practical experience with silicon diodes clearly suggests that this commonality is real
and of practical consequence. Any silicon diode that is exposed to light will function as a
photovoltaic cell, building up a voltage potential across its internal junction even when no voltage
is applied to the diode. In electronics, this is clearly undesirable. Such random voltages can
wreak havoc. Consequently, silicon diodes must be encased in something opaque so that the
silicon will not experience any light. Fortunately, silicon diodes do not convert environmental
radiance into electricity. If they did, it would not be desirable to use them in electronics.
Both alternating electrical current and light have oscillating frequencies. They differ
greatly in their frequencies, but not in their electromagnetic nature or in their possession of
oscillations. Photovoltaic cells convert light into a direct current electrical potential that can be
tapped by connecting a conductor between the two surfaces. This is the same way in which
conductors are connected to a silicon diode. Because light and electricity share a common
derivation, we can reasonably say that the silicon diode and the photovoltaic cell employ a
common principle. Phrased in terms of electricity, both convert alternating to direct current.
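
The rectifying behavior in question is simple to illustrate. The following sketch (in Python)
passes an alternating signal through an idealized one-way diode; what survives has a nonzero
average, which is the direct current component. The ideal diode is, of course, a simplification
of real silicon behavior.

    import numpy as np

    t = np.linspace(0.0, 1.0, 1000)
    ac = np.sin(2 * np.pi * 5 * t)        # an alternating signal, mean near zero

    dc_side = np.maximum(ac, 0.0)         # ideal diode: passes one polarity only

    print(f"mean of AC input : {ac.mean():+.3f}")       # ~0: no net current
    print(f"mean after diode : {dc_side.mean():+.3f}")  # ~0.318: a DC component appears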

The task before us is therefore to develop a theory of the silicon diode that recognizes
that light and electricity are the same phenomenon. The present electron-hole doping theory
doesn't do this. This theory is based upon the hole movement theory of electricity. There is
nothing about this theory that suggests that light should produce a voltage potential across the
diode. Once we have this theory, we should be able to develop a theory as to how we can
synthesize materials that will possess photovoltaic abilities at the frequency range of ambient heat.
Neither silicon diodes nor photovoltaic cells exist in nature. The materials we use for
both are synthesized. Both consist of two layers of differing material that meet along a boundary.
What produces the photovoltaic effect is the boundary between the two layers rather than the two
materials individually.
The photovoltaic cell is identical with the silicon diode. Its use for converting
light to electricity is essentially a technological accident. Very early in their development, silicon
diodes were found to interact with light. This was not desirable in diodes, but provided another
possible use for the synthesized material.
It has been the needs of electronics that have dictated the direction of past research. Any
photovoltaic material that would convert ambient radiation to voltage potential would be highly
undesirable in electronics. Therefore, every effort has been made to avoid any such material
rather than to find it.

D. HOW MECHANICAL REFRIGERATION VIOLATES
ENTROPY

Having described the two basic kinds of nontransformational strategies, let us next
consider transformational strategies. Let us begin by describing one strategy with which we have
been violating entropy for several decades.
Mechanical refrigeration employs the latent heat of vaporization to violate entropy.
When the refrigerant fluid condenses, it gives off its latent heat of vaporization. When the fluid
vaporizes, it reacquires its latent heat of vaporization. The latent heat of vaporization is itself
explicable with reference to the principle of qualitative transformation.
When a liquid vaporizes, its molecules move away from each other. Anytime molecules
move away from each other for any reason, heat is always transformed into mechanical power.
However, this does not mean that the molecules must accelerate away from each other, acquiring
increased mutual velocity. What can and does happen is that this power is expended overcoming
the attractive forces that otherwise cause the material to cohere as a liquid.
When a liquid condenses, exactly the opposite takes place. The molecules move toward
each other, transforming mechanical power into radiant heat. The repulsive nature of this
transformation is overcome by the attraction of the molecules toward each other.
Condensation produces a higher concentration of radiant energy than would otherwise be
present in that environment. When dew condenses on the grass, both the air and the grass may
initially have had the same temperature. However, the act of condensation itself will give off
radiant heat, producing a greater intensity of heat at the surface of condensation than is present in
the surrounding environment.
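
The quantities involved are substantial. The standard relation is Q = mL_v, where L_v is the
latent heat of vaporization; for water, L_v is roughly 2.26 x 10^6 joules per kilogram.
Condensing a single gram of dew therefore releases

    Q = mL_v = (0.001 kg)(2.26 x 10^6 J/kg) = 2,260 J

more than five hundred times the heat released in cooling that same gram of liquid water by one
degree Celsius.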
The reverse happens when one places a wet cloth on one's face. The vaporization of the
water consumes radiant heat, causing the cloth to have a lower temperature than is present either
on one's face or in the surrounding air. This lower temperature is due to a reduction in the
quantity of radiant energy that contacts one's face.
The mechanical refrigeration cycle consists of three transformations. The first is the
transformation of mechanical power to heat by mechanical compression. Compression forces the
molecules of the refrigerant to move toward each other.
Anytime molecules move toward each other, this movement transforms mechanical
power to radiant heat. It makes no difference what the ambient temperature and pressure
conditions may be. The only factor that matters is the relative movement of the molecules
themselves within an environment that contains radiant heat.
In the case of mechanical compression, this transformation produces a greater density of
molecules and a greater intensity of radiant heat. Because the radiant heat is now more intense
within the refrigeration tubing (the condensing coil in particular) than it is in the ambient
environment, statistical averaging of the movement of radiant heat favors movement from the
refrigerant to the environment rather than in the reverse direction.
Because the molecules of the refrigerant are now closer to each other, they have a greater
propensity to condense into a liquid. Immediately after compression, however, the radiant heat is
sufficiently intense as to counteract the attraction of the molecules toward each other.
Condensation cannot take place until the intensity of the radiant heat declines. This decline takes
place as radiant heat departs the tubing for the ambient environment.
The second transformation takes place when the refrigerant condenses, releasing its latent
heat of vaporization. Most of the heat that had been produced by the act of compression has
already departed the refrigerant. Because condensation takes place at a temperature higher than
the ambient, the heat produced by condensation also departs the tubing for the ambient
environment.
The combined heat from mechanical compression and the subsequent condensation of the
refrigerant substantially exceeds the heat equivalent of the mechanical power that has been applied
by the compressor, generally equaling 200% to 400% of the mechanical power expended
in compression. The combination of the two transformations is what enables mechanical
refrigeration to violate entropy.
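The arithmetic of this claim can be sketched in a few lines of Python. The multiplier of 3.0 below is merely an illustrative point within the 200% to 400% range cited above, not a measured value, and the function name is mine.

    # Sketch of the heat-pump arithmetic: heat rejected at the condenser
    # versus mechanical work applied by the compressor. The multiplier is
    # an illustrative value from the 200%-400% range quoted in the text.
    def condenser_heat(compressor_work_kw, multiplier=3.0):
        return compressor_work_kw * multiplier

    work = 1.0  # kW of mechanical power applied by the compressor
    print(condenser_heat(work))  # 3.0 kW of heat delivered to the ambient air
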
The third transformation is from heat to mechanical power. This takes place when the
liquid refrigerant vaporizes at the capillary tube or expansion valve. These devices allow the
refrigerant to experience a large drop in pressure. At this lower pressure, the refrigerant vaporizes.
Vaporization requires the refrigerant to reacquire its latent heat of vaporization, transforming heat
into mechanical power.
This third transformation causes the temperature in the lower pressure tubing (the
evaporator coil) to be lower than that in the ambient environment. Radiant heat moves from the
ambient environment into the tubing, again by the statistical averaging process by which
temperatures tend toward equalization.
We think of refrigeration as being a mechanical process because we must apply
mechanical power to compress the refrigerant to make the system function. If we use this process
to heat a home by drawing that heat from the cooler ambient air mass outside the
building, then we call the system a heat pump. This term is a misnomer. Heat is not actually
pumped at all. Rather, the process works by engaging in transformations in opposing directions.
These transformations acquire and release the latent heat of vaporization.
The mechanical refrigeration cycle is familiar to everyone. From a theoretical point of
view, it is important that we note what is happening. We are violating entropy by drawing radiant
heat from the undifferentiated, absolute temperature of the environment, and are transforming that
heat into mechanical power. The mechanical power in question is used to overcome the attractive
forces that the refrigerant's molecules have for each other. This happens as the refrigerant
reacquires its latent heat of vaporization.
If we can tap the undifferentiated temperature of the environment and transform that heat
into mechanical force for the purpose of refrigeration, then we should also be able to do so for the
purpose of powering our airplanes and automobiles. As is also the case with mechanical
refrigeration, we may not be able to obtain all of our power from environmental heat. However,
the more we do obtain, the easier it will be to transition to electrical power and biomass fuels.

E. THEORY OF THE FAN

It makes no difference why the molecules of air move away from each other, whether the
cause be natural or intentional. Anytime this movement takes place, heat will be transformed into
mechanical power.
It makes no difference what the ambient conditions are. Ambient conditions can induce a
transformation by allowing relative molecular movement, or they can prevent a transformation
from taking place by preventing such movement. However, the transformation itself is not a
product of ambient conditions. It is solely the product of the relative movement of molecules
toward or away from each other.
Consistent with the geometric premise from which I derived the principle of qualitative
transformation, all such movements are definable in terms of a single dimension. All that matters
is the change of distance between one molecule and another, coupled with the intensity of radiant
heat in the environment.
Transformation can take place in violation of ambient conditions. It is the ability to
induce such violation that enables us to use transformational processes to violate entropy. In the
case of the refrigeration compressor, the compressor is used to overcome ambient conditions. This
requires a net consumption of mechanical power, but produces a net heat gain that greatly exceeds
its energy equivalent in mechanical power.
In the case of a wind current rising and passing over a mountain range, the rising air
experiences a decreasing ambient pressure. This decrease allows the molecules to move away
from each other, transforming ambient heat to mechanical power, and producing a decline in the
intensity of ambient radiation. Air temperature declines as the air rises.
Gaseous pressure is a function of both particulate density and the intensity of radiant
heat. Transformation of radiant heat to mechanical power in the rising air produces a drop in
gaseous pressure as both particulate density and the intensity of radiant heat decline. The ambient
pressures of the atmosphere at the various altitudes limit this expansion. If there were no ambient
atmospheric pressure to limit this transformation, then the molecules would acquire ever greater
velocities relative to each other. Temperature would drop toward absolute zero. The atmosphere
would cease to exist.
As the wind passes over the mountains and down the other side, temperature rises as the
air moves downward into a higher ambient pressure. The molecules of air are now forced toward
each other by the rising ambient pressure. Mechanical power is being transformed into radiant
heat. Atmospheric temperature rises as ambient pressure rises.
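The cooling and rewarming of air crossing a mountain range can be roughed out with the standard dry adiabatic lapse rate of about 9.8 degrees Celsius per kilometer. The sketch below assumes that rate and an arbitrary surface temperature; both numbers are illustrative.

    # Temperature of rising air, using the standard dry adiabatic lapse
    # rate (~9.8 C per km). The surface temperature is an arbitrary example.
    LAPSE_RATE_C_PER_KM = 9.8

    def air_temperature(ascent_km, surface_temp_c=20.0):
        return surface_temp_c - LAPSE_RATE_C_PER_KM * ascent_km

    print(air_temperature(2.0))  # lifted 2 km: about 0.4 C
    print(air_temperature(0.0))  # descended back to the surface: 20.0 C
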
A fan uses mechanical power to accelerate the movement of the molecules of air. This
acceleration causes the distance between the molecules to increase in a linear direction. This
distance increases because the molecules are not all being accelerated at the same time. Molecules
that reach the fan blades first are accelerated first, increasing the distance between them and those
molecules that follow immediately behind them.
Conversely, the operation of a windmill will cause deceleration, decreasing the distance
between molecules. A windmill producing electricity will therefore cause a transformation of
mechanical power into radiant heat. This is in addition to any electrical power that the windmill
produces.
Any increase in the distance between the molecules of a gas transforms heat into
mechanical power. This transformation does not have to be uniform in all directions. If distances
are increased in one direction but not in any other direction, then all of the transformation will take
place in that one direction. In the case of a fan accelerating air movement, the increase in the
linear distance between molecules produces a transformation of ambient heat into mechanical
power along the line of acceleration. When one sits in front of a fan on a warm day, the air that
the fan has accelerated feels cooler than when one is sitting in still air. In fact, the air is cooler.
The fan has transformed ambient heat into mechanical power, reducing the intensity of ambient
radiant heat.
The fan, turbine, and piston-powering cylinder all employ this principle. The primary
function of the cylinder is to shape the transformation of heat into mechanical power so that all
transformation will take place along a line that corresponds with the piston's movement. The
steam jet that directs steam toward a turbine accomplishes this same function.
In the case of the fan, the mechanical power produced by transforming atmospheric heat
is applied in equal components and in opposing directions. This equal but opposite relationship is
dictated by the conservation of gravitational momentum. One of the two components pushes
molecules linearly away from the fan blades. The other pushes toward the fan blades, ultimately
pushing against the fan blades themselves.
In the case of a propeller-driven aircraft or of a turbofan jet, the fan is being used to
propel the air toward the rear of the aircraft. This increases the distances between the molecules of
air, transforming atmospheric heat to mechanical power. Half of this power pushes against the
propeller or fan, pushing the aircraft forward. We are obtaining a portion of the aircraft's thrust by
transforming atmospheric heat into mechanical power.
Practical experience with the turbofan jet engine indicates that it is best to apply
mechanical power to as large a quantity of air as possible. It is better to move a lot of air a little
than it is to move a little air a lot. Engines that move very large quantities of air through their fans
are known as high bypass ratio engines. The bypass ratio is the ratio of air moved by the fan in
comparison to what goes through the combustion chamber. The higher the bypass ratio, the
greater the efficiency. This corresponds with the theoretical projection produced by the principle
of qualitative transformation.
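The advantage of moving a lot of air a little can also be shown with the standard, idealized static-thrust relations, in which thrust equals mass flow times the velocity increment while the power delivered to the airstream grows with the square of that increment. The power level and mass flows below are arbitrary illustrations of mine.

    # Why high bypass wins: for a fixed power delivered to the airstream,
    # thrust = mdot * dv while power = 0.5 * mdot * dv**2, so spreading the
    # power over more air yields more thrust. Figures are illustrative.
    def thrust_for_power(power_w, mass_flow_kg_s):
        dv = (2.0 * power_w / mass_flow_kg_s) ** 0.5  # velocity increment
        return mass_flow_kg_s * dv

    power = 1.0e6  # 1 MW imparted to the airstream
    for mdot in (50.0, 200.0, 800.0):  # low to high bypass mass flows
        print(mdot, round(thrust_for_power(power, mdot)))  # 10000, 20000, 40000 N
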
As molecules of air are accelerated away from each other, the rate of transformation of
atmospheric heat to mechanical power declines. This is because the distance between the
molecules is increasing while the intensity of the radiant heat is decreasing. This means that the
number of transformational emission/reception events per unit of time is decreasing, while the
available ambient radiance is also decreasing. The mechanical power output of each such event
is therefore declining.
Therefore, the first unit of velocity applied to a molecule of air produces a greater
transformation of atmospheric heat into mechanical power than does the next unit of velocity.
Each unit of applied velocity produces a lesser transformation than did the preceding unit.
One can achieve greater efficiency by applying that second unit of velocity to a second
quantity of air. The more air that one can move with the mechanical power being produced by the
engine, the greater the degree to which the aircraft can obtain its thrust by transforming
atmospheric heat into mechanical power.
This scenario produces one obvious problem: Operation of the turbofan engine increases
atmospheric turbulence, which heats the atmosphere. If the scenario proposed is correct, then we
may be violating the absolute conservation of energy.
However, it is possible that the reduced intensity of radiant energy in the airstream
produced by the turbofan engine will compensate for the transformations that the turbulence will
produce. There is more relative movement among air molecules, but this movement takes place in
an environment that has a lower intensity of radiant heat. This is because some of the ambient
heat of the atmosphere has been transformed into mechanical power.
As arguments presented later in this chapter will make clear, we may need to challenge
the absolute nature of the conservation of energy. Conservation as an absolute may be limited to
two-particle events such as those experienced by two atmospheric molecules that are moving
relative to each other. In an overall, systemic sense, conservation may not be absolute.

F. THEORY OF RESILIENT SOLIDS

We know that we cannot achieve perfect resilience. Imperfect resilience results in the
transformation of mechanical power into radiant heat. We need a specific theory to explain why
this transformation takes place, both so that we can understand the nature of this imperfection and
so that we might perceive any new possibilities that the principle associated with imperfection
might open to us.
This discussion suggests that the true opposite of imperfection is not perfection, but
rather superperfection. Superperfection would allow us to transform environmental heat into
mechanical power. The principle of imperfect resilience should allow us to achieve
superperfection, but not perfection. In a practical sense, superperfection could be quite useful.
Nothing could be more useless than perfection.
Let us use a hard rubber ball as the basis for our explanation. Let us place a three-
dimensional coordinate system in the center of this ball, with the x and z axes being parallel with
the surface of the earth and the y axis being perpendicular to the earth's surface. Now let us drop
the ball and observe what happens.
As the ball strikes the earth, it will deform. It will be compressed along the y axis and
will stretch along the x and z axes. Compression along the y axis transforms mechanical power
into radiant heat. Stretching along the x and z axes transforms radiant heat into mechanical power.
This mechanical power pushes against the molecular bonds that hold the ball together, storing the
transformed radiant heat as a kind of potential mechanical energy. Radiant heat along the y axis
increases. Radiant heat along the x and z axes decreases. The result is an imbalance of radiant
heat.
As a result of these transformations, there is now a greater intensity of radiant heat along
the y axis than along the x and z axes. Similarly to what happens in the case of mechanical
refrigeration, radiant heat moves from the y axis to the x and z axes. When the ball decompresses
along the y axis, an event we call a bounce, there is less radiant heat along this axis to be
retransformed into mechanical power. Resilience is therefore imperfect.
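Imperfect resilience is conventionally summarized by a coefficient of restitution. As a stand-in for the description above, the sketch below uses the standard relation that rebound height equals the square of that coefficient times the drop height; the value 0.8 is merely an illustrative figure for a hard rubber ball.

    # Imperfect resilience via the standard coefficient of restitution:
    # rebound_height = e**2 * drop_height. e = 0.8 is an illustrative value.
    def rebound_height(drop_height_m, restitution=0.8):
        return restitution ** 2 * drop_height_m

    print(rebound_height(1.0))  # a 1 m drop rebounds to about 0.64 m
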
The ball reforms to its original shape, but more slowly than it deformed. The
ball does not fully return to its original shape until after it has departed the surface of the earth.
The initial event, whether compressive or decompressive (stretching), always creates an imbalance
in the distribution of radiant heat within the material. Compression creates a relative surplus.
Decompression creates a relative shortage. Both surplus and shortage result in a natural flow of
ambient heat toward equalization. This flow is always from the compressed axis or axes to any
other axes that have experienced decompression. If the processes of compression and
decompression are then reversed, then the flow of ambient heat will also be reversed.
If the ball is uniformly heated, it will expand, retaining its uniform roundness as it
expands. If uniformly cooled, it will contract uniformly. However, it is possible to heat the ball
along one axis while simultaneously cooling it along two other axes. What happens when radiant
heat is unevenly distributed among the three axes is that the ball remains expanded along two axes
and contracted along the third. The ball cannot fully return to its original shape until the radiant
heat becomes evenly distributed among the three axes.
For radiant heat to be redistributed among the axes, a deficit must be created in one or
two axes relative to the remaining axis or axes. The elasticity of the bonds within the rubber
creates the necessary deficit. Heat then moves randomly from the higher intensity axis or axes to
the lower intensity axis or axes in accordance with statistical probabilities.
When we speak of compressive or decompressive events, we are generally speaking
about an event that takes place along either one or two axes, but very rarely along all three axes.
More often than not, the events that take place along the other axes will be of the opposing type.
When opposing compressive and decompressive events take place within the same
material, the difference between imperfect and superperfect resilience depends upon the
perspective that one selects from within the material. In the case of the hard rubber ball, only
events along the y axis were imperfect.
Those events along the x and z axes were superperfect. Had the ball been able to respond
to what was happening along the z and x axes instead of what was happening along the y axis, the
force of the bounce would have exceeded the force of its impact. This is because heat had been
drawn into these axes in conjunction with the transformational process. Whenever heat is drawn
into the transformation of heat into mechanical power, this heat augments the transformation, a
portion of it being transformed as well.
Nearly all events involving a resilient solid such as rubber include both imperfect and
superperfect components. The difference between imperfect and superperfect resilience depends
upon the sequencing of the acts of compression and decompression along the particular axis. If
compression precedes decompression, then resilience will be imperfect. If decompression
precedes compression, then resilience will be superperfect.
There is nothing that we can do to make practical use of superperfect resilience in the
bouncing rubber ball. However, we may be able to use the principle associated with this
imperfection to achieve superperfection in another situation: automotive traction.

G. AUTOMOTIVE TRACTION

For an automobile to function, there are two transformations that must take place. First,
radiant heat or electrical potential must be transformed into mechanical power by an engine or
motor. The output of these devices is always rotary. The power is angular rather than linear.
Second, angular power must be converted into linear power. For angular power to be
converted into linear power, some event must take place at the point where the arc meets its
tangent. This is the one point where it is impossible to distinguish between an arc and a line. In
the case of the automobile, traction takes place where the rubber meets the road. The rubber is the
arc. The road is the tangent.
This conversion is the function of traction. Angular power must be applied to the surface
of the wheel and to the road or rail that it touches. This application of power must compress the
two surfaces to some degree.
The case of the rubber tire is like that of the hard rubber ball. Most of this deformation
takes place within the rubber tire rather than in the earth that it contacts. Consequently, most
transformations will take place in the rubber. A smaller portion will involve the road.
This compression then applies a forward pressure against the axle bearing. It is this
forward pressure that produces forward movement, a linear rather than an angular form of power.
We have studied engine and motor design much more extensively than we have studied
the theory of traction. The implications of qualitative transformation suggest that we should place
a much greater emphasis upon the theory of traction than we have in the past.
We are presently achieving imperfectly resilient traction. This is because accelerative
traction compresses the rubber along the line of vehicular propulsion. It might be possible to
achieve superperfectly resilient traction, thereby using the transformations involved in traction to
augment the mechanical power output of the engine or motor. If this can be done, then the tires
will assist in powering the vehicle.
There are two kinds of traction, compressive (imperfectly resilient) and decompressive
(superperfectly resilient). Compressive traction pushes the vehicle forward by compressing both
the tire and the earth with which it interacts. With the exception of braking, all traction that I have
observed has been compressive. It appears to me that all vehicular acceleration must involve
compressive traction, and that all braking and associated electrical generation must involve
decompressive traction.
Compressive traction is inherently wasteful. This is because compression in the direction
of vehicular acceleration produces imperfect resilience. Compression transforms mechanical
power into heat along the axis of acceleration. Because the intensity of heat along this axis is
greater than along the other two axes, radiant heat departs the axis of acceleration for the other
axes. The result is that neither the earth nor the tire fully decompress along the axis of
acceleration immediately after the tire breaks contact with the surface of the earth.
This is exactly the same process that takes place along the perpendicular axis of the
bounce in the bouncing rubber ball. In the case of the rubber ball, most of the subsequent
decompression along the perpendicular axis takes place while the ball is in contact with the
surface of the earth. If this were not the case, then there would be little or no bounce. However,
the bounce does not fully recover the force of impact because resilience is imperfect in this axis.
At first glance, it might appear that the case of compressive traction differs from that of
the bouncing ball. In the case of traction, no decompression takes place until after the rubber
breaks contact with the road. However, we must remember that in both cases somewhat
compressed rubber finds itself pushing against even more compressed rubber. In the case of the
rubber ball, the most compressed rubber is located in the perpendicular axis at the surface of the
earth. The least compressed rubber in the perpendicular axis is located at the top of the ball.
In the case of the rubber tire, the greatest degree of compression is located along the rear
of the tire's contact with the road. As the rubber leaves contact with the road, it must decompress
by pushing against this most compressed rubber. Upon careful examination, we note that this is
exactly what happens when the ball bounces. The less compressed rubber at the top of the ball
pushes against the more compressed rubber that is in contact with the earth's surface.
By contrast, decompressive traction reduces the intensity of radiant heat along the axis of
acceleration. Radiant heat then moves from the other two axes into the axis of acceleration. As
the tire is then allowed to recompress along the axis of acceleration, this recompression is slightly
less than total because the tire began recompression with a surplus of radiant heat along the axis of
acceleration. Full recompression is delayed just as the bouncing ball does not fully recompress
along its horizontal axes until after leaving the earth's surface.
Decompressive traction reverses the flow of radiant heat that characterizes compressive
traction. Because of the heat flow involved in its process, compressive traction produces
imperfect resilience. The process is like that of the bouncing rubber ball. Because decompressive
traction would reverse the sequence of events in the process, theory suggests that it should
produce superperfect resilience, transforming environmental heat into mechanical power.
We can imagine the molecules of rubber as being like the molecules of air being moved
by the turbofan jet engine. For our perspective, we can adopt the point at which the rubber first
meets the road. We will retain this perspective as the vehicle moves forward, thus moving with
the vehicle.
What we observe during decompressive traction is that the molecules of rubber accelerate
away from our perspective as they move toward the rear of the vehicle. However, they are not
allowed to move freely. They are instead tied to the sidewalls of the tire. These sidewalls transmit
the acceleration of the rubber molecules to the axle in the center. The acceleration of the rubber
molecules thus applies an angular force to the axle. The axle can thus harness the force of
molecular acceleration just as the water wheel harnesses the force of gravitational acceleration.
Decompression while in contact with the earth's surface applies both linear and angular
forces to the axle because the process has something to push against, the earth. The linear force
retards the forward movement of the vehicle, while the angular force accelerates axle rotation, thus
partially counteracting the linear force.
This situation is like that of the bouncing rubber ball, only reversed. In the case of the
rubber ball, the recompression that takes place after the ball departs the earth's surface does not
add to the mechanical power of the ball's bounce. In the case of the rubber tire, however, the
recompression that takes place after the rubber leaves the road takes place more slowly than the
decompression did. This applies additional angular force to the axle. Slowed decompression
increases superperfect resilience.
The suggestion that follows is one that I find personally difficult to believe. Critical
thinker though I try to be, I am a product of my times and my culture. However, theory indicates
that this suggestion must be made. The true function of theory isn't to assist us in doing what we
are already doing, but rather to help us see possibilities that we would otherwise overlook. This is
clearly such a possibility.
It should be possible to use decompressive traction in vehicular design for the purpose of
producing considerably more electrical power or hydraulic pressure than the mechanical power
that it consumes. This net gain can then be used to power the vehicle. It should be possible to
draw ambient heat into the transformational process, achieving this net gain by transforming heat
into mechanical power. If there is enough of a differential between applied linear mechanical
power and the net gain, then it should be possible to use this process to power vehicles.
Vehicles fully powered by this means would require no fuel of any kind. They would be
entirely nonconsuming and nonpolluting. All that they would need would be tires and some
battery or hydraulic pressure storage capacity for starting and to augment power on steep hills. It
should be possible to travel from the U.S. east coast to the U.S. west coast using only the electrical
or hydraulic power that one obtains from decompressive traction.
When discussing technological possibilities that might be achieved using decompressive
traction, we must bear in mind that present tire design favors maximum efficiency during
compressive traction. The tires are designed to minimize the degree to which heat is generated by
traction, thereby minimizing the transfer of heat to other axes within the tire. For decompressive
traction, however, the goal is to maximize transformation and thus to maximize such transfer.
Consequently, the tire design best suited for decompressive traction will be the worst
suited for compressive traction. Therefore, one should not expect to achieve good test results
using off-the-shelf tires. One would need to begin by designing and building the worst tires that
one could possibly imagine. They would heat severely during compressive traction.
The ideal test facility would be a road that has a gradual downslope at least seven
kilometers long. The ideal test vehicle would be an electric car. This is because a motor
connected to the tires especially designed for decompressive traction will act as a generator during
decompressive traction. The goal would be to let gravity power the vehicle, while maintaining
some predetermined speed by regulating the rate of electrical generation.
To test for the existence of superperfect resilience, one can compare the velocities of the
vehicle under varying generation loads. To whatever degree generation reduces velocity by
gravitational deceleration, it should generate a quantity of electricity that exceeds the mechanical
power equivalent of the lost velocity.
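The comparison proposed here is easy to state as arithmetic. In the sketch below, every figure is a hypothetical test reading of mine, not data; the test simply asks whether the metered electrical energy exceeds the kinetic energy lost to generation drag.

    # Sketch of the proposed downhill comparison. All figures are
    # hypothetical test readings, not data.
    def kinetic_energy(mass_kg, velocity_m_s):
        return 0.5 * mass_kg * velocity_m_s ** 2

    mass = 1500.0                      # kg, test vehicle
    v_unloaded, v_loaded = 20.0, 18.0  # m/s at the bottom of the slope
    generated_j = 70_000.0             # J read from the generator meter

    lost_j = kinetic_energy(mass, v_unloaded) - kinetic_energy(mass, v_loaded)
    print(generated_j > lost_j)  # True would suggest superperfect resilience
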
The critical component in decompressive traction is tire design. One goal of the tests
would be to determine which tire design generates the most electricity during its run. The greater
the electrical generation relative to the loss of vehicular velocity, the greater the
decompressive efficiency of the tire.
Having designed and tested tires especially for decompressive traction, we can then begin
designing vehicles that exploit decompressive traction. Decompressive traction cannot be used to
produce mechanical power directly. It can only be used to produce either electricity or hydraulic
pressure. Although the remainder of this discussion uses only the language of electricity, it is
likely that hydraulic pumps powering hydraulic motors would actually be the more efficient.
Generation must produce some linear drag. The goal is to minimize drag while
maximizing generation. To do this, it would be best to employ several decompressive tires rather
than fewer tires. One should seek to maximize the decompressive tractional surface area, the area
where the rubber meets the road.
This is because the system depends upon establishing a high ratio of electrical power
output (angular) to induced linear drag. With regard to any unit of tractional surface area, the first
unit of applied linear drag will produce more electricity than will the last unit. Therefore, the goal
is to spread out the linear drag over as large a tractional surface area as possible to maximize
electricity production.
Increasing linear drag will increase decompressive traction and electricity output.
However, there will be some optimum level of induced linear drag beyond which increasing linear
drag will not produce enough electricity to be worthwhile. This is because the transformation
of mechanical power to electricity and back to mechanical power (or mechanical power to
hydraulic pressure and back to mechanical power) involves losses. At all times, the net gain from
decompressive traction must at least offset these losses. Hydraulic systems may do this more
efficiently than electrical systems.
The electricity generated by decompressive traction can then be used to power a motor
being used for vehicular propulsion. Some portion of the mechanical power output of this motor
must be used to overcome the linear drag induced by decompressive traction. The remainder will
be used for forward propulsion.
Decompressive traction would produce a net electrical output because the linear drag
applied to the vehicle by the tires would be substantially less than the angular drag applied to the
tires by the generator. This difference in the degrees of drag produces a net gain in the vehicles
electrical supply. The difference results from superperfect resilience.
In one possible vehicular configuration, the vehicle might have ten tires. Two tires would
be located on a single axle powered by a motor and equipped with tires designed for maximum
efficiency in compressive traction. These tires, located at the front of the vehicle, would be used
for propulsion and steering.
The remaining eight tires would be in twin tandem arrangement in the rear of the vehicle.
Each pair on each side would power a generator. This would allow each pair to spin freely
relative to the opposing pair on the opposite side. The system would require a total of four
generators.
[2006 note: Recent designs by engineers at Kaio University in Japan place four tires in a
dual axle configuration in the front, and four additional tires in a dual axle configuration in the
rear. This would probably be the optimum configuration. Two tires in the front would function
compressively, two decompressively. Same in the rear.]
What determines the vehicles net electrical generation capability is the number of tires
engaged in decompressive traction, and the amount of angular drag applied to each tire by the
generator it serves. It should be possible to have many tires relative to vehicle weight. The
generation system will not encounter significant difficulties with low vehicle weight per tire until
this weight becomes quite low, although this new vehicle design would create numerous handling
and braking difficulties that would need to be solved. [Kaio design may have solved.]
In its principle of operation, this vehicle would exploit a cycle of opposing
transformations similar to that of mechanical refrigeration. In its theoretical basis, electricity is the
same phenomenon as radiant heat. This being the case, I will describe the
decompressive/compressive tractional system in terms of heat rather than electricity. This makes
the parallels between this cycle and mechanical refrigeration easier to visualize.
The superperfect resilience of decompressive traction draws environmental heat into the
transformational process. This heat is drawn in by the creation of temperature differentials among
the axes within the tire. A relative deficit is created along the axis of acceleration, and a relative
surplus along the other two axes. Heat then moves from the other two axes into the axis of
acceleration, just as in refrigeration it moves from the ambient environment into the evaporator
coil.
This movement is a result of the transformation of heat into mechanical power. This
transformation results from the stretching of the rubber along the axis of acceleration.
Vaporization of the refrigerant at the capillary tube or expansion valve uses exactly the same
transformational principle, increasing distances between molecules to create a deficit of heat
intensity into which ambient heat can flow.
The movement of radiant heat into the tire's axis of acceleration then further augments
this transformation by enabling the molecules of rubber to move further apart along this axis. It is
this augmentation that allows the tire to achieve superperfect resilience along its axis of
acceleration.
Superperfect resilience transforms the heat that enters the axis of acceleration into
mechanical power, the power of the tire's rotation. This mechanical power is applied to the
generator. The generator then transforms the mechanical power of tire rotation into heat
(electricity), which is transmitted to another axle to power compressive traction. The motor
powering this other axle converts heat (electricity) into mechanical power that it then applies to
compressive traction.
Because compressive traction involves imperfect resilience, a portion of the applied
power is then transformed back to heat within the tire. In principle, what happens in compressive
traction is exactly the same as what happens in the condensing coil during the refrigeration cycle.
Molecules move toward each other, transforming mechanical power into heat. In the tire, this
creates a net surplus of heat along the axis of acceleration relative to the other axes. This heat then
moves to other axes to equalize temperature, much as heat moves from the condensing coil to the
ambient environment.
If compressive traction did not effect this lossy transformation, then superperfect
resilience could not exist, in which case this cycle could not function. If the latent heat of
vaporization did not produce phenomena that are fully reversible, then the mechanical
refrigeration cycle could not function either.
The goal of the tractional cycle is the same as that of the refrigeration cycle. The goal is
to maximize the gains from one transformation while minimizing the losses from or energy
consumption of another. The refrigeration cycle requires either the application of mechanical
power or the existence of ambient temperature differentials to function. Theory suggests that the
decompressive tractional cycle should not require either.
It is possible to use differing ambient temperatures to reverse the refrigeration cycle,
producing equalized temperatures and a net output of mechanical power. Using the
decompressive/compressive tractional cycle, it should be possible to obtain net mechanical power
without the presence of any ambient temperature differential at all.
This is because this cycle generates its own temperature differential along a single axis,
the axis of acceleration. It then draws heat from the other two axes. Because the
compressive/decompressive cycle generates its own temperature differential, it can substitute two
design differences for any differentiation in ambient temperature: tire design, and the differential
in tractional surface areas between compressive and decompressive components of the overall
system.
This ability to generate its own temperature differential to provide mechanical power
should enable the tractional cycle to accomplish technological feats that are beyond the
capabilities of mechanical refrigeration. This cycle should be able to violate entropy even more
effectively than refrigeration.

H. THE IMMOVEABLE EARTH

The rule of the conservation of gravitational momentum states that for every action there
must be an equal but opposite reaction. Relative to some central perspective, the center of mass,
the momentums imparted to the two opposing bodies must be equal.
To illustrate this principle, I will imagine myself to be in outer space, floating
weightlessly. I use my arms to push against a five kilogram rock. To push against that rock, I
place my feet against a ten kilogram rock. If I accelerate them to a mutual velocity of three meters
per second, two meters per second of that velocity will be imparted to the five kilogram rock, and
one meter per second will be imparted to the ten kilogram rock. Okay, so I'm neither an astronaut
nor an athlete. Would you believe centimeters per second?
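The split of velocities in this example follows directly from the conservation of momentum, as the short sketch below verifies (ignoring, as the example does, the mass of my own body).

    # Equal and opposite momenta about the shared center of mass, using
    # the masses and the 3 m/s mutual velocity from the example above.
    m_small, m_large = 5.0, 10.0  # kg
    v_mutual = 3.0                # m/s of separation velocity

    v_small = v_mutual * m_large / (m_small + m_large)  # 2.0 m/s
    v_large = v_mutual * m_small / (m_small + m_large)  # 1.0 m/s

    assert abs(m_small * v_small - m_large * v_large) < 1e-9  # momenta balance
    print(v_small, v_large)
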
This illustration is simple and self-evident; that is, it is self-evident if one is dealing with
gravitational mass from within the gravitational perspective. This is the one perspective from
within which gravitational momentum is conserved.
Let us now move this situation to the surface of the earth. I will place the five kilogram
rock on a dolly that can move along the surface of the earth, totally without friction or resistance
of any kind. Obviously, I can't find any such dolly in real life, but for purposes of argument I will
assume that I can. Instead of pushing against the ten kilogram rock in space, I will now push
against the surface of the earth. With my feet on the earth, I will push the dolly forward along the
surface of the earth until it reaches a velocity of three meters per second.
If the conservation of momentum is absolutely true, then I should impart equal but
opposing momentums to both the rock and its dolly, and to the mass of the earth. Right?
Wrong! I can impart equal momentum to the earth if and only if it is an absolutely
simple, rigid ball, which it obviously isn't. To impart equal momentum, I would have to be able
to affect the entire mass of the earth simultaneously. This I cannot do.
There is a time delay between the time my feet affect the spot on which they stand, and
the time that my pushing is experienced elsewhere on the earth's surface, even if this elsewhere is
only a few meters away. As I apply force in some direction to the surface of the earth, I stretch the
crust slightly in one direction and compress it in another. Any time delay involved in transmission
must involve transformations. This is how the crust of the earth absorbs and transforms the power
of mighty earthquakes.
If our earth were absolutely rigid, it would be a perfect transmitter of sound. The velocity
of sound would approximate the velocity of light. Imagine what our world would sound like!
Even very rigid steel railroad tracks can only transmit the sound of heavy trains for a few
kilometers. Sound is absorbed and transformed into radiant heat because sound has a much slower
velocity than light. The slower the velocity of sound through any medium, the greater the degree
of compression and decompression involved in transmitting the wave. Compression and
decompression are what slow the sound wave. The greater the degree to which this happens, the
more quickly sound is absorbed.
The combined acts of decompression and compression produce a net transformation of
mechanical power to heat. There are two reasons for this. First, compression increases the
amount of heat that can compound during the transformational process, whereas decompression
reduces the amount of heat that can compound. Second, pushing molecules toward each other
increases the rate of compounding, while pulling them away from each other decreases the rate of
compounding. This is how objects that are far more solid and far more rigid than our earth,
objects such as steel railroad tracks, can absorb and dissipate the forces of mechanical shock
waves.
In the case of fluids, these two reasons combined are why any induced fluid turbulence
tends to lessen, transforming the motion of its molecules into radiant heat. It was by such a
process that James Joule experimentally determined the mechanical equivalent of heat in 1843.
There will, however, be some residual level of relative particulate motion produced by the random
movement of photons within the fluid. This residual level rises as temperature rises, and will
ultimately vaporize liquids if the temperature rises high enough.
We have no ability whatsoever to alter the angular momentum of our planet, regardless of
the scale of power that we apply. What this means is that our assumption of vector cancellation is
in error, and is undoubtedly wasteful.
Our earth-based motors, engines, and generators all apply angular forces to the mass of
the earth, whether they are bolted down or gravity holds them in place. These angular forces
become vectors that attempt to rotate the earth in one direction or another. We presently assume
that this application of vectors does not waste energy because the many vectors applied to the
earths surface will cancel out each other.
We assume that if we apply a nearly infinite number of vectors in random directions, then
vector addition should produce a sum of zero. This sum of zero is a product of statistical
probability, just as temperature averaging is also a product of statistical probability.
This, however, assumes that any of these vectors would actually have the ability to alter the
angular momentum of the earth. None of them actually do. Vectors cannot cancel unless they
have the ability to interact with each other. If the earth's crust absorbs the applied forces, then
they cannot interact, although it may still be possible for them to interact with other vectors that
are geographically close. And this says nothing at all about the liquid center of our planet.
We should bear in mind that the mass of our planet is approximately 99 percent liquid,
the entire mass beneath the crust being liquid. James Joule's experimental apparatus by which he
determined the mechanical equivalent of heat was also approximately 99 percent liquid. That
experiment transformed mechanical power into heat. What is there, if anything, that distinguishes
our application of mechanical power to the mass of the earth from Joule's experimental apparatus?
Joule applied the mechanical power of a string-attached falling mass to a paddle in a
container of water. The paddle created turbulence in the water. Joule then measured the resulting
increase in water temperature. In a practical sense, we are merely repeating Joule's famed
experiment. We are applying mechanical force to a massive body of liquid, and somehow
expecting to achieve results that contradict Joule's. Didn't James Joule tell us what we should
expect?
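Joule's arrangement reduces to simple arithmetic: the work done by the falling mass reappears as a temperature rise in the stirred water. The masses, drop height, and water quantity in the sketch below are illustrative values of mine, not Joule's actual figures.

    # Sketch of the paddle-wheel arithmetic: mechanical work reappearing
    # as a temperature rise. Figures are illustrative, not Joule's own.
    g = 9.81          # m/s^2
    c_water = 4186.0  # J/(kg*K), specific heat of water

    def temperature_rise(falling_mass_kg, drop_m, water_kg):
        work_j = falling_mass_kg * g * drop_m  # work done by the falling mass
        return work_j / (c_water * water_kg)

    print(temperature_rise(10.0, 2.0, 1.0))  # ~0.047 K per 2 m drop of 10 kg
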
Therefore, from both theoretical and practical perspectives, it is best to assume that
vectors do not cancel. However, this does not mean that the 50 percent of our mechanical power
that we apply to the earth's surface is entirely wasted. A maximum of 25 percent can be wasted.
Half of that power that is applied to the surface of the earth and that is thereby subject to
transformation must, by conservation of momentum, push back in the direction of shaft or
armature rotation. This is because the many individual transformational events produce equal and
opposite momentums of gravitational mass. This is essentially the same as the theory of the fan.
One obvious practical application of this principle would be in electrical power
generation. Two generators, each of equal size but rotating in opposing directions, could be
placed adjacent to each other, side by side, and very rigidly attached to each other. When
operating at equal electrical power outputs, they should cancel the vectors that each would
individually apply to the mass of the earth. The increased electrical power output per unit of
steam input could be as great as 33 percent of the present output, and would necessarily be
limited to this increase.
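The 33 percent ceiling follows from the arithmetic of the preceding section: if at most 25 percent of input power is lost to uncancelled vectors, then recovering that loss raises useful output from 75 percent of input to 100 percent, a relative gain of one third. The sketch below simply restates that calculation.

    # Arithmetic behind the 33 percent ceiling: recovering a 25% loss
    # raises useful output from 75% of input to 100% of input.
    baseline_fraction = 1.0 - 0.25  # useful fraction with the loss
    recovered_fraction = 1.0        # useful fraction with full cancellation

    relative_gain = (recovered_fraction - baseline_fraction) / baseline_fraction
    print(round(relative_gain * 100))  # 33 percent
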
A second very obvious practical application would involve mechanical refrigeration.
Two compressors could be attached to each other, again side by side, and operating in parallel
with each other, but with motors rotating in opposing directions. Combined in this way, they
should produce greater cooling than the two compressors could produce individually. This is
unlikely to be of practical value unless one is dealing with larger commercial
refrigeration and air conditioning systems.
This discussion raises questions about Joule's experimental derivation of the mechanical
equivalent of heat. His experimental work was extremely conscientious. However, he assumed that he was
dealing with a single body of liquid, the water he had placed in his container. What he did not
understand, and had no way of knowing, was that he was dealing with two bodies of liquid. The
second body of liquid was the earth itself. His experimental apparatus was rigidly attached to the
earth, if by nothing other than gravity. What was it that prevented the water in his apparatus from
assuming a permanently spinning relationship relative to the surface of the earth? How did
interaction with the earth slow this spinning relationship, ultimately bringing the waters to an
apparently resting state relative to the surface of the earth?
Do we need to revisit the experimental derivation of the mechanical equivalent of heat,
finding other means by which to derive it? Perhaps our assumption that the many vectors applied
to the surface of the earth cancel has led to a previously unnoticed derivational error.
Perhaps we are attributing inherent inefficiencies to technologies that are, in fact, far more
efficient than we understand. We are assuming that the inefficiencies we mathematically infer are,
in fact, products of the technologies themselves and not of our relationship with a fluid earth.
This suggests that proper management might produce sudden, and quite unexpected,
improvements in practical outputs, improvements accompanied by sharp declines in energy
consumption. We already possess the technologies. We need merely manage the ways that we
apply the technologies. Management in and of itself, independent of technological innovation,
will produce the improvements.
Another application of this relationship of canceling vectors involves automotive traction.
Earlier in this chapter, I proposed an experiment using a downhill slope to test the practical value
of decompressive traction. By itself, decompressive traction applies a vector to the surface of the
earth in a particular direction, in the direction of the vehicle's movement. By contrast,
compressive traction applies a vector in exactly the opposite direction.
Therefore, a vehicle that employs both compressive and decompressive traction should
achieve a high degree of vector cancellation. This high degree of vector cancellation will augment
the power obtained by decompressive traction and that is then applied to compressive traction.
Therefore, experiments based solely upon decompressive traction will not reveal the full potential
of the tractional cycle. Decompressive traction will make compressive traction more efficient, and
vice versa, if both are applied in the same vehicle and in close proximity with each other.
The opposing natures of compressive and decompressive traction can be easily observed
on any American street. Rapid acceleration (compressive) from a stopped position tends to raise
the front of the vehicle relative to the rear. Maximum braking (decompressive) tends to lower the
front of the vehicle relative to the rear. The decompressive-compressive tractional cycle would
mutually cancel both of these tendencies.
In all such situations, there are multiple perspectives within which momentum is being
conserved, each such perspective being between two (and only two) particles. The many
perspectives within which momentum is conserved will be in motion relative to each other,
although this motion may be very slight. The reason that momentum is not conserved with respect
to the earth as a single unit is because there are multiple transformations taking place. Each
transformation has its own perspective from within which momentum is being conserved, meaning
that there is no single perspective. In and of itself, the existence of such multiple
perspectives is theoretically important.

I. ORBITAL MECHANICS AND TIDES

We like to think that the moon rotates around the earth. While this appears to be the case,
Newtonian orbital mechanics states that the two bodies actually orbit around a common point, their
center of mass. Because this is the case, they function as if they were a single body rotating
around a single axis just as the earth itself rotates around its axis.
Imagine the earth and moon as being two children sitting on a teeter-totter. The moon,
being much lighter, sits far out away from the fulcrum on which they rock. The earth, being much
heavier, sits very close to the fulcrum. Because of this difference in distance, the two apply equal
leverage at their fulcrum. They balance. The fulcrum is their center of mass.
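The position of that fulcrum, the earth-moon barycenter, can be computed from rounded standard values for the two masses and their separation, as the sketch below does.

    # Earth-moon barycenter from rounded standard values.
    m_earth = 5.97e24  # kg
    m_moon = 7.35e22   # kg
    separation_km = 384_400.0

    # Distance of the earth's center from the common center of mass.
    d_earth = separation_km * m_moon / (m_earth + m_moon)
    print(round(d_earth))  # ~4675 km, inside the earth (mean radius ~6371 km)
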
The masses of the moon and earth balance at their combined center of mass around which
they rotate. This orbit takes place in response to the mutual gravitational attraction of the two
bodies for each other. So as to counteract this mutual attraction, the orbit establishes a mutual
relationship of centrifugal force. The centrifugal force of the orbit exactly counteracts the mutual
gravitational attraction.
If the balance of this orbit were perfect for all points on the earth, then the centrifugal
force that these points experience as a result of the combined orbit would exactly equal and
counteract the gravitational attraction of the moon.
While this balance is perfect with regard to the masses of the two bodies considered as
wholes, the balance is imperfect with regard to particular points on the surface of the earth.
Mutual gravitation and centrifugal force oppose each other. As the earth rotates on its axis, any
given point on the earth's surface will experience alternating increases and decreases in lunar
gravitation due to changing distances from the moon, and inversely alternating changes in the
centrifugal force of the shared orbit. Therefore, these points on the earth's surface experience
imbalances that the earth as a whole does not.
This inverse alternation produces a disparity between lunar gravitational attraction and
the centrifugal force of our common orbit. This inverse alternation produces the tides and likely
contributes to other geophysical phenomena.
On the side of the earth facing the moon, the gravitational attraction of the moon
exceeds the opposing centrifugal force of the orbit. This attracts the ocean's waters toward the
moon. On the side away from the moon, the centrifugal force of the orbit exceeds the opposing
gravitational attraction of the moon. This throws the ocean's waters away from the moon.
The net effect in both cases is to distort the shape of the earth slightly, producing two
tides. However, it is more than the oceans that are affected. The entire mass of the earth on the
two sides is affected. This being the case, the tides may primarily be shock waves that reflect this
effect, rather than waters that are being affected independently of the earth's crust. If this is the
case, then the tides are manifestations of the earth's liquid center.
We are accustomed to think of our oceans as vast. They are, in fact, a relatively thin
layer on top of an also thin crust. Below this is a volume and mass of liquid that makes the
vastness of our oceans look like nothing at all.
The two disparities should continually flex the crust's tectonic plates and the lines along
which they meet. This flexing may be the primary cause of plate movement. The plates would
not move if they were not being continually flexed. If flexing produces movement, then it should
also cause strains to develop along faults where movement has been halted by friction between the
plates. If the same forces that produce the tides also produce plate movement, then there should
be a correlation between unusually strong tides and earthquake danger.
My primary reason for mentioning the tides and the distortion of the earth's shape is to
point out the conceptual difficulty with the absolute conservation of energy. Distortions result in
the transformation of mechanical power to heat, just as any form of fluid turbulence will transform
mechanical power to heat as the turbulence settles. This was the transformation that James Joule
measured to determine the mechanical equivalent of heat in 1843.
Does this transformation of mechanical power to heat produce an equivalent reduction in
the rate of the earth's rotation relative to the moon? If the conservation of energy is absolute, then
it must. However, these transformations do not affect the entire mass of the earth simultaneously.
In this, they are like the mechanical forces that our machines apply to the earth, only on a vastly
larger scale.
To alter the relative rotational momentum of the earth, the effects of these
transformations must be mechanically transmitted simultaneously to the entire mass of the earth.
This is particularly difficult when dealing with a relatively thin crust and a liquid center that can
be subjected to some degree of turbulence. Because this transmission involves delays and because
these delays involve compression and transformation, I doubt that there is an absolute equivalence
between the quantity of heat produced and any change in the earth's rotation. The former exceeds
the latter, meaning that energy is, in fact, being created from nothing.
In making this statement, I must add one caveat regarding the nature of quantity: It is
likely that all such quantities are relative. There may be no such thing as absolute quantity.
Consequently, what is created is relative quantity rather than absolute quantity. The conservation
of energy, which has always assumed absolute conservation, has also assumed the existence of
absolute quantity.
The larger and more fluid the planet, the greater its ability to resist changes in its orbital
rate. Consequently, the greater the degree to which it is likely to engage in creation.

J. ORBITAL MECHANICS WITHIN THE SOLAR SYSTEM

A common theory of the origin of the solar system has assumed that the various planets
and moons came into being as clouds of dust that gravitationally coalesced into spherical objects.
The arguments that follow assume this to be the case, and use the phenomena of tides and crustal
distortion proposed in the previous section to explain certain phenomena of the solar system.
Spin on one toe with your arms outstretched. Now draw in your arms. What happens?
Your rate of spin increases. This is because angular momentum is being conserved. To
compensate for your reduced diameter of rotation, your rate of rotation must increase.
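This spin-up is the conservation of angular momentum, L = I times omega, held constant as the moment of inertia shrinks. The moments of inertia in the sketch below are arbitrary illustrative values of mine.

    # Conservation of angular momentum: L = I * omega stays fixed, so
    # halving the moment of inertia doubles the spin rate. Values are
    # illustrative.
    I_arms_out = 4.0  # kg*m^2, arms outstretched
    I_arms_in = 2.0   # kg*m^2, arms drawn in
    omega_out = 2.0   # rad/s

    omega_in = I_arms_out * omega_out / I_arms_in  # conserve L
    print(omega_in)  # 4.0 rad/s: the spin rate doubles
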
Let us apply this principle to a coalescing dust cloud that is orbiting around a planet. As
the cloud coalesces, its rate of rotation relative to the planet increases. At the same time, the
ability of the cloud to retain heat within itself also increases. Because the particles are getting
closer to each other, the turbulence produced by its orbital relationship with the planet also
increases.
If the rate of orbital rotation is sufficiently great, its size sufficiently large, and the
gravitational/centrifugal interactions with its planet sufficiently strong, then the body may never
be able to coalesce into a solid/liquid moon. As the nascent moon approaches coalescence, the
heat buildup within the coalescing body may become sufficiently great as to cause the body to
explode.
I suggest that this is how the rings of Saturn were formed. If such an explosion had taken
place and the debris then ground against each other while in orbit, then the explosion should have
produced a ring. This ring should have two areas of maximal thickness located 180 degrees apart,
and two areas of minimal thickness, each offset 90 degrees from the areas of maximal thickness.
One of the areas of minimal thickness should correspond with the nascent moon's position at the
time of the explosion.
The areas of maximal and minimal thickness should not rotate along with the debris that
constitute the ring. Rather, the debris themselves should rotate successively through the areas of
maximal and minimal thickness.
Those clouds that are able to coalesce but are relatively small and are very strongly
attracted by their gravitational/centrifugal interactions with their planet should have their orbits
slowed relative to the planet. Eventually, they will establish an orbital relationship in which the
same side of the moon always faces its planet or orbital partner. This is why the same side of the
moon always faces the earth, and why the rotation of Mercury around its axis is very slow relative
to its rate of orbital rotation around the sun.
Once the moon establishes a relationship where one of its sides always faces its planet,
there will no longer be a gravitational/centrifugal relationship that will produce heat within it. If
the core of the body was dependent upon such a relationship to maintain its liquid state, then the
core will gradually cool and solidify.
Finally, there are the larger bodies that experience weaker gravitational/centrifugal
relationships. They will be able to retain their rotations around their own axes relative to their orbital
partners. The smaller and more strongly affected bodies may experience gradual slowing of their
rotational speeds. If they fully solidify while still possessing relative rotation, then they should
retain their remaining rotational rates indefinitely.
The even larger and less affected bodies may experience very little if any slowing of
rotational rate, despite the presence of gravitational/centrifugal relationships that produce heat.
Virtually all of the heat these events produce will be by creation rather than transformation of
rotational rate.
Even if genuine creation is taking place within our solar system, I doubt that man will
ever be able to create any usable quantity of energy in any practical sense. However, the existence
of processes by which such creation can take place should be carefully noted. Taking such note
should cause us to more carefully consider our patterns of thought.

The conservation of energy is the statement by which we justify our use of the equals
sign. We should not apply the equals sign as an absolute where true equality of input versus
output may not exist. In any such situation, our use of the equals sign should be carefully
qualified.

K. THE MEANING OF MANAGEMENT

The goals of scientific theorization and those of engineering differ in one key respect.
Scientific theorization generally attempts to be an inductive process searching for a general theory.
Francis Bacon advocated such use of inductive reasoning. He was confident that our God-given
abilities would allow us to induce accurately. We would be able to discover the laws that govern
the universe.
By contrast, the primary tasks of engineering involve deductive reasoning. Engineers do
not attempt to discover general theories. Instead, engineers like to work from a general theory or
principle toward specific, practical applications that employ that theory or principle. Engineering
is always more efficient in the management of its tasks when it possesses some general statement
from which it can deduce the possibilities that are open to engineers.
General statements allow engineers to perceive possibilities, including those such as
decompressive traction that would otherwise escape their notice. In the absence of a general
statement that enables engineers to challenge their own perceptions, they have little choice but to
plod ever forward doing what they have done in the past, or some variation on it.
When I write of the need to manage, I refer to the need to structure the tasks of
engineering based upon some understanding of principle. In the absence of some principle that
enables engineers to see beyond our present technological culture, they have little choice but to
continue within our present cultural patterns.
I hope that my work will increase the abilities of engineers to manage their endeavors,
and to break with longstanding cultural patterns that ultimately promise us little more than grief.
To the extent that my work has included mistakes, I hope that those mistakes will prove
stimulating and useful to others. I freely extend access to my mistakes in the hope that others will
be able to create something useful by correcting them.
The greatest human error doesn't lie in the error itself, but rather in the failure to try. We
can't adapt without trying.




CHAPTER FIVE:
RESONANT TRANSFORMATIONS AND
ELECTRON SHELLS

The principle of qualitative transformation states that information cannot transit from one
inertial perspective to another inertial perspective without altering the mutual velocity of the
sending and receiving particles. This statement solves the question of the conservation of energy
when one transits from one inertial perspective to another, which was the question that underlay
Einstein's special relativity.
At first glance, it would appear that conservation would dictate that the reverse must also
be true: that conservation would require that a qualitative transformation could take place only
when there is a net transfer of information from one inertial perspective to another.
Despite what common sense would initially seem to dictate, this is not the case. Even
though there may be no net change in the distances between two particles, or in their net inertial
perspectives, qualitative transformation can still take place. The net result will be an increase in
the total information present to a degree not predicted by the inverse squared law.
This net increase cannot take place to any significant degree at just any distance, but can
only take place at certain very specific distances. At these distances, it is possible for
transformational events to alter the inverse squared law of field strength, increasing strength to
levels far above those predicted by the inverse squared law.
At certain regularly spaced and predictable distances, which we might call 1, 2, 3, 4, 5, and so on,
the inverse squared law will not apply. The gaps separating these distances are all of identical
width. Within these gaps, the inverse squared law is modified little if at all.
When one projects this situation into a three dimensional context, one gets a set of
regularly spaced, concentric shells. When one injects electrons into such a pattern of concentric,
electromagnetic shells, one finds distances that electrons can occupy. One also finds distances
through which electrons can pass, but which they cannot occupy.
Again at first glance, it would appear that such a situation must violate the conservation
of matter/energy. How can the total quantity of information increase if there has been no net
alteration in relative motion so as to produce such an increase? As incomprehensible as it might
appear at first, this can happen.
Let us use the structure of Einsteinian relativity as our frame of reference. Special
relativity dealt with purely linear relationships, although it often spoke in spatial terms. Thus far,
this book has dealt with purely linear relationships, and has proposed a conceptual alternative to
special relativity.
However, Einstein recognized that relativity must deal with angular (rotational)
relationships as well as linear relationships. He further believed that something would happen
when he applied the principles derived in purely linear special relativity to angular relationships:
He would derive solutions to additional problems that had never before been solved.
In this chapter, I am transitioning from the purely linear nature of special relativity to the
angular nature of general relativity. As Einstein anticipated, this shift in the perspective of the
question allows us to solve problems that cannot be solved solely with reference to linear
perspectives. The existence and the mathematical nature of electron shells pose one such problem.
The principle of qualitative transformation is absolute from within the purely linear
perspective. The principle remains absolute with regard to any relationship between any two
particles, regardless of the manner in which either may be moving relative to other particles.
However, either or both of these particles may be involved in orbital relationships with other
particles, if not with each other. My alternative to general relativity requires a minimum of three
interacting particles.

A. RESONANCE ALTERS THE INVERSE SQUARED LAW

Resonance can result when we apply the absolute, universal principle of qualitative
transformation to situations involving at least three particles. Two or more particles are involved
in an orbital relationship with each other. The third particle experiences this orbital relationship
from its perspective. However, its relationship with the orbiting pair is linear rather than orbital.
The resonance that the orbital relationship produces allows the quantity of information present in
the relationship to violate the inverse squared law, to increase in quantity above the quantity that
the inverse squared law would predict. In such instances, field is experienced as being stronger
than predicted.
How can we reconcile such increases with the conservation of matter/energy? We must
add another qualifier to the concept of conservation itself. Conservation can be said to take place
if all apparent violations of conservation are fully reversible, and if there are built-in limitations
that dictate that such apparent violations can proceed only so far without necessarily reversing
themselves. If both of these conditions are met, we can say that the conservation of necessary
results is maintained, even though there are obvious short-term violations.
We can say that resonance produces short-term violations, but does not violate the
conservation of necessary results. In doing so, resonance adds complex considerations to the
simplicity of relationships as predicted by the purely linear principle of qualitative transformation
in the third chapter. These complex considerations manifest themselves within what we consider
to be the structure of matter.
Yet we will discover that even this is an insufficient statement of the conservation
principle. We will note that many of these violations, particularly those we associate with the
stable structure of matter, are rarely fully reversed. They add to the mass and charge of the
universe. Given that this is the case, how seriously should we concern ourselves with the
conservation principle? I have no answer for this question. It may be that conservation, like
energy, is the question rather than the answer.
The phenomena that resonance produces are all experienced as being structural in nature.
Resonance can result from the rotation of the nucleus relative to the surroundings of the atom, or
from rotational relationships of particles within the nucleus relative to the perspective of the
nucleus as a whole. This chapter focuses on the structural implications of resonance.
We know that all nuclei rotate extremely rapidly, both with regard to the perspectives of
nearby atoms and with regard to the various perspectives of the individual atom's electrons. All
rotational relationships involving the atom produce structural phenomena. Whether it be the
rotation of the nucleus as a whole, or the rotational relationships within the nucleus, all such
rotational relationships will produce apparent violations of the conservation of matter/energy.
These violations produce and/or alter structural relationships.
Resonance produced by rotational relationships of or within the nucleus might explain
five phenomena: Atomic mass is experienced as being greater than the sum of the masses of the
constituent protons and neutrons. Certain isotopes have radioactive half-lives. Atoms have
electron shells. Electrons orbit the nucleus in a cloudlike manner. Resonant relationships extend
beyond the atom itself so as to also affect crystalline and molecular structure.
Once we have derived the principle of resonance and have predicted its implications, the
key task will be to attribute each resonance-based structural characteristic to its causative
resonance. As we explore larger and more complex structural relationships, we can expect to find
phenomena that are based upon mixtures of causative resonances.

B. THE PRIMACY OF ANGULAR RELATIONSHIPS

No structured universe could be based upon purely linear relationships. The only
structures that could be based upon such relationships would be purely explosive or implosive.
While it appears that the earliest period of the Big Bang must have involved momentum that was
entirely linear, the development of structural organization required the emergence of angular
momentum.
For the purposes of this chapter, I merely note the following: All explosions that are
familiar to us are linear in their nature. Even asymmetrical explosions such as those resulting
from shaped charges are linear. We can explain the asymmetry of a shaped charge from within a
conceptual framework that considers only linear phenomena. If the Big Bang was initially purely
linear in nature, then how can we explain the emergence of angular momentum? I have no
answer.
Resonance exists and assumes its importance because angular momentum is more
fundamental to the stable structure of our universe than is linear momentum. For structural
organization to have become stable and enduring, attractive forces had to exceed repulsive forces.
Otherwise, the matter/energy comprising the universe would have continued to fly apart, creating
nothing resembling structure.
However, a surplus of attractive forces over repulsive forces created a new problem:
What would prevent these forces from causing the universe to implode? This problem was solved
by the development of both (1) angular momentum and (2) the perspectives from within which
such momentum could be defined.

C. RETAINING ELEMENTS OF CLASSICAL PHYSICS

Until now, I have attempted to eliminate certain of the basic statements of classical
physics. I have attempted to develop a theoretical structure that is entirely experiential and
relativistic. I have attempted to deny the existence of any essence that attaches itself to particles,
and hence the existence of matter that possesses essence. I have also attempted to deny the
existence of any geometric perspective that is not entirely relativistic. Although methodologically
this is probably the best way to reduce physics to its simplest and most elemental level, this is also
an effort that may lead to some apparently necessary restoration of certain basic statements of
classical physics. In stating this to be the case, I am not arguing in favor of this restoration. I am
merely acknowledging my dependence upon two statements that are not entirely relativistic. In
doing so, I may merely be acknowledging the inadequacy of my own intellect. Einstein was
unable to achieve a purely relativistic pattern of thought. Although I may have moved closer to
pure relativity, I, too, have been unable to achieve pure relativity.
There are two such statements that may need to be restored so as to develop a complete
and credible theoretical structure. The first such statement is that particles do, in fact, possess
inertial mass. This is a quality that all particles apparently possess, independent of our interactions
with them. In restoring inertial mass, it is necessary to draw a clear and strong distinction between
inertial mass and gravitational mass.
Classical physics did conceptually distinguish between inertial mass and gravitational
mass, then blended the two together into a single, more general concept of mass. The reason for
this blending was that physicists concluded that inertial mass equaled gravitational mass. If one
went up by one kilogram, the other went up by the same kilogram. If the two retained a constant
relationship with each other, then they must be the same phenomenon.
Inertial mass is the ability of an object to resist alteration of its direction and velocity of
motion. Gravitational mass is the ability of this same object to attractively interact with other
bodies having mass. Double the inertial mass of an object on a wheeled dolly and one doubles the
effort required to accelerate it across the floor. Double the gravitational mass and one
doubles the weight of the body, its gravitational interaction with the earth.
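A minimal sketch, using hypothetical numbers, of the two distinct roles described above:

    G_FIELD = 9.81  # m/s^2, gravitational field strength at the earth's surface

    def force_to_accelerate(inertial_mass, acceleration):
        # Inertial mass: resistance to changes in velocity (F = m * a).
        return inertial_mass * acceleration

    def weight(gravitational_mass):
        # Gravitational mass: attractive interaction with the earth (W = m * g).
        return gravitational_mass * G_FIELD

    m = 50.0  # kg, hypothetical; classical physics treats both masses as equal
    print(force_to_accelerate(2 * m, 1.0) / force_to_accelerate(m, 1.0))  # 2.0
    print(weight(2 * m) / weight(m))                                      # 2.0

Doubling either mass doubles its corresponding effect, which is why classical physics saw no reason to keep the two concepts separate.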
Classical physics equated one with the other because the two were thought to equal each
other. In the discussions that follow, I argue that they are not always equal. When resonant
transformations take place, gravitational mass can exceed inertial mass. Inertial mass does not
change. At the same time, however, gravitational mass increases. Therefore, we must distinguish
much more carefully between inertial mass and gravitational mass.
I am accepting inertial mass as being an essence, a quality of the particle itself. As best I
can determine, inertial mass is constant, absolute, and immutable. This being the case, it is always
experienced as being constant. It participates in events involving information, but it is not itself
information. It is only a quality of particle. Gravitational mass is purely relativistic and
experiential in character. Gravitational mass has no constant essence. It is information and only
information. It interacts with particles, but is not a quality of particles. It merely uses particles as
its mirrors to reflect itself back and forth between particles.
It is possible that later theoretical work will be able to restore the absolute equality of
gravitational and inertial mass. I hope that this will be the case. For now, I am conceding. I hope
that my concession is in error. However, I must acknowledge that I have encountered two
stumbling blocks that I have not overcome. Rather than allow these stumbling blocks to halt my
effort, I choose to acknowledge their existence. It is better to openly acknowledge the existence of
stumbling blocks than it is to paper them over, or hide them in linguistic convention. The second
stumbling block follows.
The second statement of classical physics that I may be forced to accept is the existence
of some absolute perspective for angular momentum. This perspective may be a quality of various
areas within the universe, and may not be a universal constant. I accept this with much less
certainty than I accept inertial mass. However, it may not be possible to explain the apparently
stable structure of the atom in all known settings without accepting at least the possibility of such
a perspective. However, there definitely is no absolute perspective for linear events.

The existence of inertial mass as a property of particles places us inescapably in the
gravitational perspective. Given that inertial mass is absolute and immutable, it may be that
inertial mass has the ability to establish some absolute perspective from which to measure angular
momentum. If this is the case, then inertial mass anchors the three dimensional coordinate system
in much the same way as did Descartes' aethereal vortices. This suggests that there is an absolute
perspective for every locale, but no absolute perspective that governs the entire universe. The
average inertial perspective of all the inertial mass in a locale establishes the angular perspective
for that locale.
It should be noted that both of these statements apply to mass only. They do not apply to
electromagnetic phenomena in any way. Electromagnetic phenomena remain purely experiential
and relativistic. It does not appear that they apply to gravitational mass either, although
gravitational events involving inertial mass always involve them.

D. DEFINING THE ANGULAR PERSPECTIVE

For angular momentum to exist, the single dimensional line of interaction connecting two
particles must be observed to rotate relative to that background against which angular momentum
is measured. For example, we say that the sun rotates on its axis relative to the perspective of the
fixed stars that constitute its environment. We can say this because we observe the sun's
network of single dimensional perspectives as rotating in relative unison against the sun's
background. In the context of the hypothesis just stated regarding the existence of a localized
absolute perspective for angular momentum, the background is all of the localized inertial mass,
averaged into a collective inertial perspective. Stars hundreds of light years away are included in
the localized mass.
We say that the earth rotates around the sun relative to this same environment. If it was
not possible to establish such a perspective, then we would be unable to explain why the earth
does not fall into the sun. In the absence of definable angular momentum, the solar system would
implode into the sun, the forces of gravitational attraction being far greater than any forces of
electromagnetic repulsion. In turn, the sun would implode into itself.
The same is true about the rotation of the nucleus of an atom. If the attractive forces
holding the nucleus together were less powerful than the repulsive forces that might blow it apart,
then the very existence of nuclei would be an extremely unstable phenomenon if even possible at
all. At the same time, however, the nucleus would implode if relative angular momentum failed to
prevent such implosion. Angular momentum could not exist in the absence of inertial mass.
Because of the extremely high rate of angular rotation of the nucleus relative to its
environment, we can say that the adjacent atoms are to the nucleus what the fixed stars are to the sun. Regardless
of how rapidly or turbulently we may perceive these atoms as moving relative to our perspective,
they are like fixed stars relative to the nucleus of any single atom.
Electrons rotate around nuclei much like the earth rotates around the sun. There are no
repulsive forces to prevent electrons from falling into the nucleus. Angular momentum relative to
the background perspective established by the nearby atoms is all that prevents the atom from
imploding.
There are two kinds of orbital events that we can study. The first is the rotation of a body
around its own internal axis relative to that body's environmental perspective. Both the earth and
the sun rotate around their internal axes. The second kind is the orbital rotation of a second body
around this central body. The earth rotates around the sun. In any atom, electrons orbit around the
nucleus.
In purely gravitational contexts, there is no difference between the two kinds of orbit. As
was noted in the previous chapter, both kinds of orbit actually employ the same kind of rotation.
Mass rotates around a central point, the center of mass, regardless of whether the mass consists of
a single sphere or of two spheres held in orbital relationship so that they orbit around a central
point. Note that this commonality applies, in a practical sense, only to gravitational relationships.
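This commonality can be expressed numerically. The following sketch, with hypothetical masses, locates the central point around which both bodies rotate; when one mass dominates, that body sits almost exactly at the center of mass and barely wobbles.

    def barycenter_radii(m1, m2, separation):
        # Each body circles the common center of mass, so m1 * r1 = m2 * r2.
        r1 = separation * m2 / (m1 + m2)
        r2 = separation * m1 / (m1 + m2)
        return r1, r2

    # A heavy central body and a light orbiting body (hypothetical values).
    r_heavy, r_light = barycenter_radii(2.0e30, 6.0e24, 1.5e11)
    print(r_heavy)  # roughly 4.5e5 m: the heavy body barely moves
    print(r_light)  # roughly 1.5e11 m: the light body does nearly all the orbiting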
When we discuss electromagnetic relationships, we can reasonably say that there are two
different kinds of orbital relationship. This is because momentum is conserved in the gravitational
perspective rather than the electromagnetic perspective. In a practical sense, we can distinguish in
principle between the orbiting of a nucleus around its central axis, and the orbiting of an electron
around the nucleus of the atom. While it is true that the electron does have a tiny mass, its charge
is vastly greater and vastly more important than its mass. In terms of its environment, this means
that the electron's lines of interaction involving charge are vastly stronger than its lines involving
mass.
While the tiny mass of the electron will cause the mass of the nucleus to wobble very
slightly, this minute wobble is so small as to have no practical significance. For all practical
purposes, we can consider the electron as orbiting around a nucleus that maintains a fixed position.
We are able to make a similar statement about the structure of the proton. A proton consists of
two particles, a neutron plus a positron. Because it has so little mass, the positively charged
positron orbits around the neutron with which it is paired within the proton.
In the fourth chapter, discussions of orbital mechanics focused on gravitational
relationships within the solar system. This chapter focuses on both the gravitational and
electromagnetic relationships within the atom, particularly as qualitative transformations affecting
these relationships are altered by resonance.
As we proceed with this analysis, we must consider three governing principles. The first
is the inescapable importance of angular momentum as being the principle that enables the
existence of stable structure within the universe. The second principle is that momentum is
conserved within the gravitational perspective, and only within this perspective.
The third principle is the single dimensional premise. Because all interactions must be
single dimensional, all interactions must be defined as being particle-to-particle. They cannot be
defined as being particle-to-composite-body. The nucleus is such a composite body. When a
particle interacts with an entity consisting of more than one particle, an entity such as a nucleus, it
must interact with each of the constituent particles individually.
Consequently, each such individual interaction must be considered. If the rotation of a
nuclear particle around the axis of the nucleus causes the distance between this nuclear particle
and the interacting particle to oscillate, then we must consider the practical implications of this oscillation.
Although the third principle was implicit in the spatial concept of field that preceded the
single dimensional premise, the single dimensional perspective makes the third principle an
inevitable and inescapable imperative.
This chapter is primarily concerned with how our universe establishes and maintains
stable structure. The same principles that assure stability can also be used to explain certain
structural instabilities, radioactive decay in particular. These instabilities are actually
manifestations of the principles that make our universe extremely stable and predictable. The fact
that we know this stability to be overwhelmingly the rule makes it extremely difficult to postulate
a universe that is entirely relativistic and interactive. It may well be impossible to use relativity
and interaction to explain this extreme degree of stability.
What would happen if we claimed that this stability does, in fact, result from phenomena
that are entirely relativistic and interactive? The discussion that follows is based entirely upon
relativity and interaction. In considering these arguments, we must bear in mind that there are vast
differences in the concentrations of matter at varying points within our universe. The
concentration of matter in the core of the sun is vastly different than the concentration of matter in
space one light year away from the sun. This vast difference makes it exceedingly difficult to
justify the arguments that follow. In the following argument, I am attempting to reunite the
concepts of inertial and gravitational mass, even though I have doubts as to whether this is
possible.
With regard to purely linear relationships, there are two terms that we can define in
absolute terms: the distance between two particles, and their instantaneous mutual velocity. There
is one term that we can never define at all: the absolute movement of either of the particles. The
very idea that there can be any such absolute movement violates the mathematical perspective of
relativity. Neither linear nor angular momentum can ever be defined in absolute terms. Their
existence must result from the interactions of the two particles with a vast network of other
particles.
The relationship between any two particles can be defined in terms of a single dimension,
and requires no reference to any perspective outside this one single dimensional perspective. The
reason why it is possible to define both instantaneous distance and instantaneous mutual velocity
in absolute terms is that there is so little that requires definition.
Defining angular momentum requires that we define an additional perspective. This
additional perspective, one from which we can measure the rate of angular rotation, can never be
defined in any terms other than relative terms. Within the context of any given spatial region,
however, this perspective is shared among all atoms that share the particular spatial region. When
one moves from one spatial region to some other spatial region within the Big Bang, one should
expect to transit from one angular frame of reference to another angular frame of reference. In this
sense, the localized regions have much in common with the localized regions of Descartes'
aethereal vortices.
Such frames of reference function because they give both inertia and momentum to both
linear and angular motion. The single dimensional perspective that two particles share is
sufficient to define their mutual velocity, their mutual accelerations/decelerations, and the strength
of their shared information that each particle experiences at any given time. However, no such
single dimensional perspective can confer either inertia or momentum. Inertia is the ability of a
body to resist changes in its linear motion. Inertia is a function of mass, not of momentum.
Momentum is the product of inertial mass times some relative velocity.
Inertia and momentum exist because each particle participates in a vast number of single
dimensional relationships with other particles. This being the case, any mutual
acceleration/deceleration along any one line of interaction must produce
accelerations/decelerations along this vast network of other lines. If any one line of interaction
could act completely independently of all other such lines, then the transformations taking place
along that line would involve no inertia or momentum.
Lacking inertia or momentum, conservation as mandated by the principle of qualitative
transformation could not take place. The very existence of conservation depends upon the
existence of this vast network of other linear relationships. Altering momentum requires that we
alter the mutual velocities of particles among a vast number of interactive lines. The total gain or
loss of gravitational momentum will be reassigned to or taken from other lines of interaction.
When an electromagnetic transformation takes place along any given line of interaction,
altering mutual momentum, the combined total of the mutual gravitational momentum and the
total quantity of electromagnetic information present along the line will change. The
electromagnetic transformation will produce a corresponding gravitational transformation
affecting gravitational momentum but not gravitational inertia. We know that we live in the
gravitational perspective, but we cannot explain why. We know that inertial mass exists as a
phenomenon that is distinguishable from gravitation, but we can only explain this phenomenon as
being a product of gravitational interactions.
We can guess as to what it means to live in the gravitational perspective. When we say
that we live in the gravitational perspective, we mean that the combined total of the gravitational
lines of interaction remains fairly constant. It is this combined constancy for all lines of
interaction collectively that gives inertia to mass. There is no such constancy in electromagnetic
relationships, which is why we do not live in an electromagnetic perspective.
The problem with this explanation is that it states that inertial mass should be a function
of the presence of gravitational relationships. Very weak gravitational relationships should
produce weak inertia. Inertia could not be a constant. The structure of matter should vary greatly
depending upon the gravitational context of the particular atom.

E. RESONANT TRANSFORMATIONS

When developing and analyzing physical principles, it is best to begin with the simplest
possible explanations and illustrations. Once we have acquired some understanding of the
implications and applicability of a given principle, we can then seek to find those situations to
which it best applies.
So let us begin with a simple illustration of principle, and not concern ourselves with the
validity of the particular cause-effect relationship. Once we grasp the meaning of the illustration,
then we can seek proper cause-effect relationships.
This illustration most likely applies to relationships between two or more nuclei.
Literally, it probably doesn't apply to the relationship between an electron and a nucleus. The
electron-nucleus relationship is most likely established by the internal orbiting of protons within
the nucleus. However, the principle nonetheless remains the same.
For purposes of explanation and illustration, let us assume the simplest possible situation.
Let us consider a situation in which an electron is orbiting around a nucleus. The nucleus is itself
orbiting rapidly around its own internal axis. The electron cannot interact with the nucleus as a
whole. Instead, the electron must individually interact with each proton and neutron within the
nucleus.
Let us further assume that the electron is maintaining a circular orbit at a constant
distance from the center of the nucleus. Let us further state that the orbital rate of the electron
around the nucleus is far less than the orbital rate of the nuclear particles around the nucleus's
internal axis. This being the case, the electron always experiences most of the nucleus's internal
orbit.
Under such circumstances, it is impossible for the electron to maintain a constant distance
from any of the protons or neutrons unless the electron assumes a fixed position directly above
one of the nucleus's poles of rotation. Because the electron orbits the nucleus, it cannot do this.
The distances between the electron and each nuclear particle will continually oscillate. The
oscillation of these distances is unavoidable.
Because the electron maintains a constant distance from the center of the nucleus, the
distances from the electron to the individual nuclear particles will oscillate at an extremely rapid
rate, but within a very narrow range of distances. This very narrow range of distances is a
function of the very small diameter of the nucleus.
These oscillations of distance will result in oscillating qualitative transformations.
Information and mutual velocity will be transformed back and forth, each into the other, although
there will be no net change in mutual distance over a period of time. The average mutual distance
will remain constant.
These oscillations will impose waveforms onto the information being transmitted back
and forth between the electron and each nuclear particle. Any situation in which a waveform
moves back and forth between two particles can, under the proper circumstances, produce
resonance. The single dimensional perspective connecting the two particles, a perspective along
which information continually flows at the constant velocity of the speed of light, provides a
context where such circumstances can occur. To achieve the proper circumstances, one need only
reach some resonant distance between the electron and the nuclear particles.
Resonance requires a continuing input of some oscillation into the flow of information.
The point of oscillation is the nuclear particle that is orbiting within the nucleus. Resonance will
take place anytime the transmitted oscillation is received back at the point of oscillation, and in
phase with the continuing input of oscillation. When this in-phase relationship exists, the strength
of the oscillation presently being emitted will be the product of the actual present input itself,
which is the mutual velocity of the nuclear particle toward the electron, times the strength of that
oscillation which is presently being received at the point of oscillation.
Obviously, there will be two opposing transformations. There will be one transformation
while the nuclear particle is moving toward the electron. There will be an opposing
transformation while the nuclear particle is moving away from the electron. The results will not
cancel. The first relationship can theoretically compound to infinity. The second can only
inversely compound from one to nearly zero. The first will be experienced as being of far greater
importance than the second, overriding any significance that the second might have.
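The feedback rule just described can be sketched in a few lines of Python. The constant gain per round trip is a hypothetical simplification; it stands in for the mutual velocity of the nuclear particle toward or away from the electron.

    def compound(gain, round_trips):
        # Emitted strength = present input (gain) times the strength
        # just received back at the point of oscillation.
        strength = 1.0
        history = []
        for _ in range(round_trips):
            strength = gain * strength
            history.append(round(strength, 4))
        return history

    print(compound(1.5, 6))  # grows without limit: can compound toward infinity
    print(compound(0.5, 6))  # decays from one toward nearly zero

A gain above one compounds without limit, while a gain below one can only shrink from one toward zero, which is the asymmetry claimed above.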

F. ATOMIC MASS

Resonant transformations should produce or contribute to five familiar phenomena.
These are phenomena that have never before been tied together by a single, unifying principle.
The first of these concerns the masses and charges of atomic nuclei as experienced from outside
the nucleus. A mass that rotates relative to its environment should be experienced as having both
mass and charge greater than the sum of its constituent particles. Rotation itself adds to
experienced mass and charge. This is due to the interactions of nuclei with other nuclei.
There is some slight degree of resonance at most distances. However, there are only
certain distances at which this resonance will be more than slight.
In virtually all situations, nuclear rotation will produce some positive and some negative
compounding. However, this compounding will not proceed very far before it is largely cancelled
by its opposite. Consequently, resonant compounding will add predictably but relatively slightly
to the externally experienced mass and charge of all rotating nuclei.
Resonant compounding will take place between nuclei, but will usually proceed only
within a statistically predictable range. The strengths of individual interactions will build in
intensity upwards from that predicted by the inverse squared law, only to collapse back toward the
intensity predicted by the inverse squared law, as positively compounded information
encounters negative transformations. Similarly, negatively compounding information will
encounter positive transformations. The two do not completely cancel each other. As noted in the
previous section, the positively compounding transformations will outweigh the negatively
compounding transformations. The net result is a gain in experienced mass and charge.
A second statistical phenomenon further adds stability to this situation: What we
experience as nuclear mass is the product of billions of such relationships, the phasings of which
are random relative to each other. Consequently, the inconstancy that appears within each
particle-to-particle relationship is experienced in the statistical aggregate as adding to atomic mass
and charge. Because of the extremely short time periods involved in nucleus-to-nucleus
transformations and the extremely large numbers of such transformations, this added atomic mass
and charge will be experienced as being extremely constant.
Nuclei will be experienced as having greater mass and charge than the sum of their
constituent particles would indicate. This deviation from the sum of constituent particles results
from dynamic, transformational interactions, not from the structure of the nucleus itself.
Regardless of the cause, we experience the result as being structural in nature. We consider
experienced mass to be a characteristic of the structure of matter, not as being merely a matter of
experience.
This increased mass and charge that results from internuclear resonance suggests that it
should not be necessary to employ the concepts of the third and fourth kinds of field, the weak and
strong nuclear. These kinds of field are used to explain why the masses and charges of nuclei
exceed the sums of the masses and charges of the constituent protons and neutrons, and the net
loss of nuclear mass and charge that results from nuclear fission. The principle of resonant
transformation can be used to explain these losses: Any fissile event that reduces internuclear
resonance will produce an experienced loss in the total masses and charges of the resulting nuclei.
Eliminating the weak and strong nuclear fields resolves a question of mathematical
context: If we believe these kinds of field to be inherent to the nucleus itself, rather than residing
in any of its constituent particles, then in what situations do we consider the nucleus itself rather
than its constituent particles as being the fundamental unit that establishes mathematical context?
We think of the weak and strong nuclear fields as being like a kind of nuclear "glue," intrinsic to the
nucleus as a whole and not to the individual particles that comprise the nucleus. The idea of
nuclear "glue" is indeed a sticky, sloppy one!
Belief that the weak and strong nuclear fields exist independently of the masses and
charges of the individual protons and neutrons creates serious problems defining mathematical
context. We can no longer define the nucleus as being a composite entity whose sole relevant
constituent parts are subatomic particles. This is because the weak and strong nuclear fields are
not themselves assigned to particular particles, but are instead assigned to the nucleus as a whole.
However, there is one mathematical context that we could define that will allow us to
state that subatomic particles themselves establish the sole mathematical context: We can define
what we have previously hypothesized to be additional kinds of field as instead being
manifestations of resonant transformational phenomena. This should simplify the mathematical
contexts that we use for subsequent analysis.
Why should increasing the number of particles in the nucleus produce a disproportionate
increase in the added mass and charge of the nucleus? Perhaps it is because larger nuclei
generally have larger radii. For the purpose of explanation, I will assume that all neutrons and
protons engage in essentially the same very simple orbit around the orbital axis of the nucleus. It
is highly likely that this is seriously oversimplified.
Because of the larger radius, each proton's or neutron's orbit around the nucleus's internal
axis should produce a greater degree of transformation than would a similar orbit of lesser radius
within a smaller nucleus. Because of this increased radius, the degree of transformation per
transformational event will be greater.
The increase in experienced nuclear mass that results from resonant transformation is a
function of statistical probabilities that favor increase per transformational event over the number
of such transformations. A larger nucleus with a greater degree of transformation per
transformational event will therefore experience a greater statistical probability that it will be
experienced as having greater mass and charge. Consequently, the resonance-produced
experience of increased atomic mass should be proportionately greater among larger nuclei than
among smaller nuclei.
Outside the individual atom, the experiences of both gravitational and electromagnetic
field strengths are affected by resonance, although it is likely that gravitational experience is
affected to a greater degree than is electromagnetic experience. This is because protons consist of
neutrons with positrons orbiting around them. The resonant effects of the orbiting of the positrons
should partially offset the resonance produced by the orbiting of the proton around the axis of the
nucleus.
I have very little idea as to what this should mean with regard to the conservation of
matter and energy during nuclear fission. Fission should reduce both experienced mass and
experienced positive charge. Therefore, we cannot explain the energy output of fission as being
solely the result of the loss of experienced mass. What happens to the positive charge that we no
longer experience? Furthermore, we are no longer dealing with the loss of some essence, matter,
that is being transformed into energy. We are merely talking about the loss of the intensity of
experienced mass and charge.
However, there is one possibility that we can hypothesize, one that unfortunately relates
to the design of nuclear weapons: half-lives are manipulable to some degree. If one can alter the
distances between nuclei so that internuclear resonance increases, then this increased resonance
should reduce half-lives. This reduction in half-life would reduce the critical mass of the bomb,
allowing a smaller mass of fissile material to produce a greater output of neutrons.
I would not mention this possibility if I was not already convinced that we have been
unknowingly doing this since the 1950s. Therefore, I see no possibility that stating the principle
will advance weapons development.

G. ATOMIC HALF-LIVES

Stating that resonant transformations statistically increase the experienced masses of
nuclei does provide us a clue as to the cause of the half-lives of radioactive isotopes. Half-lives
are obviously a probabilistic phenomenon. So is the increased mass and charge of rotating nuclei.
Resonant transformations suggest that these two phenomena share a common cause.
According to the principle of resonant transformation, nuclear rotation increases the
masses and charges of nuclear particles as experienced from outside the nucleus. Such rotation
does not cause any increase at all in either mass or charge as experienced from within the nucleus.
Consequently, this increase in experienced nuclear mass can contribute to disintegration of the
nucleus. This increase cannot contribute to nuclear cohesion.
Because the increase in experienced masses and charges of nuclear particles is
probabilistic, it is possible for these experienced increases in mass and charge to momentarily
compound out of control. Perhaps this runaway compounding could take place from two opposing
directions simultaneously, thereby pulling the nucleus apart. Or perhaps one particle, a neutron or
an alpha particle, experiences greater compounding from outside the nucleus than does the
remainder of the nucleus.
When correlated with the extremely high rate of nuclear rotation, such runaway
compounding must be statistically extremely rare, even for the least stable elements and isotopes.
Statistical probabilities ensure such rarity. However, when one considers the large numbers of
atoms that can individually be affected by this statistically rare phenomenon, one can reasonably
predict that, proportionate to the vast numbers of atoms considered, a certain comparatively
minute number will experience this phenomenon during any given period of time.
When the structure of the nucleus of a given isotope renders it subject to probabilistically
induced division, then one can predict that this isotope will have a half-life. Proportionate to the
given number of such nuclei in existence at any given time, a given percentage should be expected
to divide within any particular unit of time.
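This statistical prediction takes the familiar exponential form. A minimal sketch, with hypothetical numbers:

    def remaining(n0, t, half_life):
        # A fixed per-interval division probability across a vast population
        # yields N(t) = N0 * (1/2) ** (t / half_life).
        return n0 * 0.5 ** (t / half_life)

    n0 = 1.0e20        # hypothetical initial number of nuclei
    half_life = 100.0  # hypothetical half-life, in arbitrary time units
    for t in (0, 100, 200, 300):
        print(t, remaining(n0, t, half_life))
    # Each interval of one half-life halves the surviving population, even
    # though any individual division is a statistically rare event.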
While nuclear half-lives cannot, in and of themselves, be stated to prove the hypothesis
that resonant transformations account for the increased masses and charges of rotating nuclei, the
half-life phenomenon itself is certainly consistent with this hypothesis.
Previously, we have considered the half-life phenomenon to be related to inherent
instabilities within the nucleus itself, and not as relating to the nucleus's interactions with outside
bodies. This internal instability was assumed to result from some failure of the nucleus's "glue" to
hold the nucleus together.
The hypothesis of a nuclear "glue" was never really satisfactory. The added mass and
charge of the nucleus was attributed to this "glue," meaning that the added mass in particular
could be experienced at a distance. At the same time, however, it is usually assumed that the
primary significance of the two nuclear fields, the weak and strong nuclear, lies solely within the
nucleus.
If the masses of nuclear particles cause these particles to be attracted toward each other,
then why would any nuclear "glue" be required to hold the nucleus together? Clearly, a
nonrotating nucleus should require no "glue." If "glue" is needed, then it is needed only because
the nucleus is rotating. We need "glue" to counteract the effects of centrifugal force resulting
from nuclear rotation.
Because the nucleus is rotating, we must consider the implications of resonant
transformations. Because these implications include the experience of added mass and charge and
nuclear half-lives, we should no longer have any reason to hypothesize the existence of the third
and fourth kinds of field, the weak and strong nuclear.


H. THE SIMPLE HYDROGEN ATOM

Thus far, this chapter has dealt with phenomena that are produced by the rotation of the
nucleus as a whole. However, there is an obvious problem when we attempt to use rotation of the
entire nucleus to explain electron shells. Rotation of the nucleus as a whole requires the presence
of at least one proton and one or more neutrons. The simplest hydrogen nucleus consists of a
single proton. How can a single nuclear particle rotate so as to produce resonant compounding,
thereby producing electron shells?
Who says that the proton is so simple that it cannot produce resonant compounding? If
electron shells are produced by resonant compounding, then there must be something about the
individual proton that produces this compounding. If a proton alone in a nucleus can do this, then
shouldn't the individual protons present in more complex nuclei be able to accomplish the same
task? Shouldn't all electron shells be caused by the same phenomenon?
The proton is not a single, unitary entity. It is instead the fusion of a positron and a
neutron. When a proton decays, it emits a positron and becomes a neutron.
We are speaking, of course, from the gravitational perspective. Virtually all of the mass
is in the neutron, along with none of the charge. By contrast, the positron is like an electron in that
it has a lot of charge and very little mass. Because we live in the gravitational perspective, we see
events from the perspective of the neutron rather than that of its mated positron. We speak of the
proton as emitting a positron and becoming a neutron.
Both the neutron and the positron orbit around their common center of mass. Because
virtually all of this mass is located in the neutron, the neutron itself effectively functions as being
the center of mass. We can reasonably ignore the orbit of the neutron around this common center
of mass, and simply state that the positron orbits around the neutron. Practically speaking, this
means that the resonant transformations that take place as a result of positron orbit are entirely
electromagnetic in nature. Electron shells have no gravitational component.
Furthermore, we should not expect electron shells to be latitudinally structured. If
electron shells were produced by uniform rotation of protons around a central axis, we should
expect to find the greatest resonance along the equator of rotation and no resonance at all along the
alignment of the two poles. Consequently, the latitude of one's momentary position should
determine the strength of resonance. However, orbiting protons can orbit randomly relative to
each other. This means that, with the exception of hydrogen, electron shells should not be
latitudinally structured.
There is something else that we can say about the proton: The field that binds the
positron to the neutron is entirely gravitational. Furthermore, this gravitational bond is fairly weak
and its rotational rate is surprisingly slow. This is because the positron has so little mass. There
are two conceivable ways by which we could measure this rotational rate.
First, we can measure the diameters of the simplest, lightest hydrogen atom's electron
shells. Because the nucleus consists of only one proton, we can reasonably assume that we are
measuring the rotational rate of the positron itself. However, we must acknowledge some
possibility of variability in this rate, as noted later.
What we need to determine is the location of each of the concentric shells that the
electron cannot occupy, the perfectly resonant distances. The speed of light divided by twice one
of these distances corresponds with some multiple of the positron's orbital rate.
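Stated as arithmetic, the proposed relation is f = c / (2d) for each perfectly resonant shell distance d. The sample distance in the sketch below is hypothetical, chosen only to be on the order of the innermost shell of hydrogen.

    C = 2.998e8  # speed of light, in meters per second

    def resonant_frequency(shell_distance):
        # The proposed round-trip condition: f = c / (2 * d), corresponding
        # with some multiple of the positron's orbital rate.
        return C / (2.0 * shell_distance)

    print(resonant_frequency(5.3e-11))  # about 2.8e18 Hz for this sample distance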

The second method is more difficult to achieve, at least initially. However, it will give us
more accurate results. It will probably be necessary to attempt the first method initially, then use
this data to estimate the orbital rate. With this estimate in hand, we might then be able to attempt
measurement by the second method.
The second method would be to use electromagnetic radiation to induce proton decay,
resulting in positron emissions. Electromagnetic radiation that has a frequency corresponding
with the orbital rate of the positron should induce proton decay. However, there is one very
obvious reason why we will be unable to directly achieve this frequency. It is the same reason
why this frequency is rarely experienced on our planet, and consequently why proton structure is
very stable throughout the universe with occasional rare exceptions.
The frequency that will most efficiently induce proton decay is the one that will be most
readily absorbed into the neutron-positron relationship. Absorbed in small amounts among vast
numbers of neutrons, this absorption will very slightly alter the gravitational relationships between
neutrons and positrons by altering the positron's rate of rotation. This alteration will be so minute
as to have no measurable effects. Because the frequency is absorbed, it will not be transmitted to
other atoms.
Within the perspective of our earthly matter, there should be no frequency of
electromagnetic radiation that is absorbed more readily or even as readily. This is a frequency
that will be absorbed by nuclei, quite independently of any relationship between electrons and
nuclei. The extreme ease with which this frequency is absorbed protects the stability of atomic
structure.

I. ELECTRON SHELLS

The principle of resonant transformation produces a corollary to the resonant increase in
nuclear mass and charge: electron shells. This is because there is a direct correlation between
distance from the nucleus and the degree of resonance that the electron will experience.
At most distances, electrons will experience little if any resonance. There is also a set of
regularly spaced distances that electrons cannot occupy because these distances are characterized
by strong resonance. Because of the strength of this resonance, electrons will be pulled back
toward the nucleus, and away from the positively compounding resonant distance.
When we discuss the existence of shell-like structures, what we are referring to is a set of
concentric, layered, onionlike distances from the nucleus. Whatever the particular rotational
relationship that dictates the existence of shell-like phenomena, the nature of all such shell-like
relationships is essentially the same. Each shell-like layer that can be continually occupied is
separated from the next such layer by another shell-like layer that cannot be continually occupied.
The existence of these concentric shells gives structure to each nucleus's relationship with
the atom's electrons. Most of the time, a predictable number of electrons will be located in each
shell. Therefore, we think of electron shells as being structural in nature.
The principle of qualitative transformation and its resulting corollaries necessarily argue
that there can be no structure apart from interaction. In the absence of interparticulate interaction,
it is impossible to determine whether structure exists or not. If and when we find that particular
structural characteristics are predicted by a particular principle of interaction, as we do in the case
of electron shells, then we must postulate that what we experience as structure is, in fact, purely
interactive in nature. Structure does not affect interaction. Instead, it is interaction that creates
structure.

Shell-like phenomena are products of both distance and time. There are certain
predictably resonant distances. However, these distances will not resonate significantly unless
some particle is allowed to occupy such a distance for a sufficient period of time as to experience
substantial resonant compounding. We may speak of shells that can sustain continued occupancy
as existing without regard to time. However, the intermediating shells of resonant compounding
require occupancy for sufficient time as to pull the electrons out of resonance and into the next
closest shell that will allow continued occupancy.
Regardless of the degree to which the rate of electron rotation may change relative to our
perspective, changes that we experience as photon emissions or absorptions, this rate will remain
very slow relative to the rotational rate of the protons within the nucleus. This situation is one in
which protons consistently rotate internally at a much faster rate than electrons rotate around the
nucleus. This disparity in orbital rates sets up a situation in which the relationship can become
resonant. If this disparity in orbital rates is coupled with a resonant distance, then the relationship
will become resonant.
No electron can continue to occupy any distance at which resonant compounding would
proceed without limit. Such distances are ones through which electrons can pass if they move
quickly enough, but at which they cannot sustain orbit. Given the degree to which unrestrained
resonant compounding can increase electromagnetic field strengths above those predicted by the
inverse squared law, no electron can enter a fully resonant distance with sufficient orbital
momentum as to sustain orbit at such a distance.
Let us attempt to graph the relationship between the distance of potential electron orbits
from the center of the nucleus, and field strengths that would exist at each of these distances if an
electron occupied each distance for an indefinite period of time. What we quickly discover is that
field strength at the various distances cannot be represented by an evenly progressing graph of the
inverse squared relationship.
Instead, we must superimpose an additional waveform onto this graph, a U-shaped
waveform generally proportional in its added strengths to the strength of the inverse squared law
at the various points along the graph. At regularly spaced intervals, the steep walls of the U will
reach toward infinity.
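A minimal numerical sketch can make this graph concrete. The short Python fragment below
is illustrative only: the spacing of the resonant radii, the shape of the resonance factor (poles
wherever the radius is a whole multiple of the assumed spacing), and all constants are assumptions
chosen to reproduce the qualitative picture described above, not values derived from this book's
principles.

import numpy as np

K = 1.0          # arbitrary field constant (hypothetical units)
SPACING = 1.0    # assumed spacing of the fully resonant radii
EPS = 1e-6       # keeps the poles finite so the values can be printed

def field_strength(r):
    # Inverse squared baseline, multiplied by an assumed resonance factor.
    # The factor diverges (the steep walls of the U) wherever r is a whole
    # multiple of SPACING, and stays near 1 midway between those radii,
    # the occupiable center of each shell.
    baseline = K / r ** 2
    resonance = 1.0 / (np.abs(np.sin(np.pi * r / SPACING)) + EPS)
    return baseline * resonance

for r in (0.50, 0.99, 1.00, 1.50, 2.00, 2.50):
    print(f"r = {r:4.2f}   field strength = {field_strength(r):12.1f}")

Between the walls, the printed values follow the smooth inverse squared decline; at r = 1.00 and
r = 2.00 they explode, dividing the distances from the nucleus into occupiable zones separated by
walls that cannot be occupied.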
The sharp walls of the U's define a set of concentric shells that surround the nucleus.
Shells of occupancy exist because the sharp walls of these U's divide distances from the nucleus
into zones that can be occupied, separated by those that cannot be occupied. Although an electron
can pass through the walls of the U's, it cannot occupy these distances.
When an electron approaches the walls of the U, the rate and degree of resonant
transformation will increase sharply. If the electron were to attempt to occupy the zone of
maximum resonance, it would experience field strength as compounding toward infinity. As it
passes through this zone and enters the next electron shell, it experiences a zone where the
oscillating, previously positively compounded waves in the flow of information will encounter
negative transformations. These negative transformations largely cancel the positive
compounding just produced within the zone of maximum resonance.
Because electrons must pass through such experiences on their ways from one electron
shell to the next, we cannot assume that we can compute orbital energies merely by knowing the
distance from the center of the nucleus at which the electron is orbiting. The experience of
passing through the zone of maximum resonance should, in and of itself, alter orbital energies
independent of and in addition to the actual change in orbital radius.
We should not expect to find most electrons in a circular orbit at the center of the U. This
orbit is, in fact, the lowest energy orbit within the particular shell. Instead, we should expect
orbits to be oval or elliptical. Orbits go through the center of the shell, the zone of zero resonance,
while going back and forth between the zones of increasing resonance on both sides. The greater
the degree to which the orbit is oval or elliptical, the greater the energy value of the orbit.
Entering resonance toward the outer zone of the shell, the electron would experience
itself as being pulled back toward the center of the shell. The electron then enters the inner zone
of the shell, once again experiencing increasing resonance. This inner zone accelerates the
electron linearly along the side of the U, increasing its rate of rotation. This increased rate of
rotation is then lost as the electron once again enters the outer zone. The more oval or elliptical
the orbit, the greater the electron's rate of rotation around the nucleus. We cannot measure orbital
rate and then infer orbital distance based upon the inverse squared law.
However, this situation may help us to resolve a dilemma noted in the third chapter. I
suggested that photon emissions resulting from orbital shifts are caused by changes in the orbital
rates of electrons. I then noted that our experience with these emissions went contrary to what this
hypothesis suggests. Rather than decreasing as electrons move outward from shell to shell, photon
frequencies increase.
This explanation of the nature of electron orbital values within each shell might resolve
the dilemma. As electrons move outward from shell to shell, they also tend to move into orbits
that are more oval or elliptical. Such orbits are of higher energy value and greater rotational
frequency. Therefore, rotational velocities will become much greater than the inverse squared law
would predict.
Finally, the electron will move into an outer shell at such velocity and with such
momentum that the outer wall of the shell's U will not be able to restrain it. When this happens,
the electron will depart the atom.
It is worth noting that the resonance that produces electron shells will affect electrons to a
greater degree than it will affect other nuclei. In part, this is due to the very low mass of the
electron. It is also because the particles in other nuclei are rotating around their own nuclei's
internal axes. This rotation means that the distances of nuclear particles from the centers of other
nuclei will always be changing, at least with regard to most of the particles involved.
Because resonant transformation produces structure rather than emission, and further
because this structure contains predictable and definable discontinuities, we may be able to use
this conceptual situation to reconcile the mathematical continuities of relativity with the
discontinuities of quantum. We can hypothesize that the mathematical discontinuities we find in
the quantization of electron orbital energies are derived from the mathematical continuity inherent
in the principle of relativity, as reinterpreted by the principle of qualitative transformation.
Therefore, we cannot consider quantum and relativity as being co-equal in their statuses.
The reconciliation of relativity and quantum must consider relativity to be the more fundamental,
and quantum to be derived from relativity.

J. STANDING WAVES

This reconciliation of relativity and quantum is possible because two of my premises
differ from Einstein's. First, I argue that all fundamental relationships can be defined with
reference to a single dimension. Hence, there is no need to assign qualities to otherwise empty
points in space. General relativity need not assign values for the units of time and distance to
otherwise empty points in space. The fundamental concepts around which phenomena must be
defined do not consider space at all, with the exception of some possible absolute perspective for
angular relationships. The fundamental concepts consider only particles, interparticulate
distances, and mutual interparticulate velocities.
Second, I argue that field can be defined as being fundamentally emissive in nature.
Field information, both attractive and repulsive, is transmitted back and forth along the single
dimensional line that unites two particles into relationship with each other. What we consider
field structure is actually a manifestation of an interactive, emissive phenomenon.
Because field can be defined in terms of single dimensional emissions, and reflections of
this information back and forth between two points, two kinds of waves can exist. One is the
wave of fluctuating information intensity that travels along a line back and forth between the
particles. The other is roughly analogous to a standing wave that can exist at various, regularly
spaced points along the line.
Assuming that the line is sufficiently long relative to the wavelength of the oscillations,
there will be points at which phasing will cancel, and other points at which phasing will
compound. Stationary, standing waves will exist at those points along the line where the phases of
different waves emitted at different times will meet in phase with each other, thereby
compounding rather than canceling each other. A stretched string of resonant length can make a
similar pattern of standing waves visible. The waves structure an otherwise straight string.
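That much is ordinary superposition, and a few lines of Python can reproduce it. Nothing in
this sketch is specific to the field hypothesis of this book; it simply adds two equal waves
traveling in opposite directions along a line and locates the stationary nodes, whose regular
spacing is half the wavelength.

import numpy as np

WAVELENGTH = 2.0
k = 2 * np.pi / WAVELENGTH     # wavenumber
omega = 2 * np.pi              # angular frequency (arbitrary units)

def displacement(x, t):
    # Two equal waves traveling in opposite directions along the line.
    # By the identity sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt), the
    # sum is a stationary envelope 2 sin(kx) that merely oscillates in time.
    return np.sin(k * x - omega * t) + np.sin(k * x + omega * t)

x = np.linspace(0.0, 4 * WAVELENGTH, 33)
envelope = 2 * np.sin(k * x)
nodes = x[np.isclose(envelope, 0.0, atol=1e-9)]
print("stationary nodes at x =", nodes)   # multiples of half a wavelength
for t in (0.0, 0.13, 0.77):
    print(f"displacement at the nodes at t = {t}:",
          np.round(displacement(nodes, t), 12))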
In the third chapter, I noted that traveling gravitational waves that we would experience
as being emissive cannot exist. This is because momentum is conserved in the gravitational
context. The conservation of gravitational momentum in no way precludes the existence of
nonmoving, standing waves.
I argue that these standing waves manifest themselves as structurally concentric zones
surrounding the spinning nucleus and its internally spinning protons. In the proton, positron
rotation around the neutron produces standing gravitational waves, although these waves are so
weak as to be completely insignificant. Rotation of the nucleus as a whole, however, will definitely
produce concentric shells that other nuclei will find it difficult to occupy. These are the
gravitational waves that Einstein hypothesized and sought, although they are neither what he
expected nor where he expected to find them.
Einstein was unable to find gravitational waves because he believed field to be
fundamentally stationary rather than a phenomenon that emissively travels relative to its source.
With the possible exception of the probability waves of quantum, previously hypothesized to
explain the otherwise inexplicable movement of photons, standing waves have long been known
to exist only where a moving wave exists. The existence of the standing wave presumes the
existence of a set of moving waves.
Because these standing waves do not move relative to the two interacting particles, we
experience them as being structural rather than emissive in their nature. We see them manifested
in the form of concentric shells that we understand to be structural.
When we say that interaction manifests itself as structure, we are saying that the
outcomes are so predictable and that this predictability is so stable that we know in advance what
will happen. When I talk about structure, I am no longer talking about matter as we have
previously understood it, as something distinguishable from interactions involving matter. Rather,
I am talking about interactions that are so predictably stable as to give structure to the events that
take place within our universe. When this stability is added to inertial mass, we experience it as
matter.
Einstein was correct in predicting that gravitational waves should exist. However, I
suggest that he was incorrect as to their nature and significance.
Einstein believed that there was some clear delineation between structure and interaction,
and that structure affected interaction. I argue that structure is purely a product of interaction, and
that interaction frequently manifests itself as structure.
Einstein predicted that gravitational waves would be extremely weak and therefore
difficult to detect. The easiest to detect would be those emitted by rapidly rotating pairs of binary
stars. These pairs would create waves that would cause the values of the units of time and
distance to fluctuate. These fluctuations would be emitted outward from the rotating pairs, and
would weaken as their respective gravitational fields weakened in accordance with the inverse
squared law of field strength.
According to general relativity, gravitational fields assign values to the units of time and
distance at various points within them. These values project outward indefinitely in a Gaussian
field. Gravitational waves would take the form of ripples in the units of time and distance, ripples
spreading outward just as waves spread across the surface of a pond. As a wave passes through
the various points within the field, it will momentarily alter the values of the units of time and
distance that the field has already assigned to various points within it.
According to Einstein's prediction, the values of the units of time and distance will
oscillate within an extremely small range that might be detected over a distance of several
kilometers by an optical interferometer. The rate of oscillation will correspond with the rate at
which the binary stars rotate around their common center of mass, the rotation predicted by
Newton's orbital mechanics.
I argue that there is no theoretical difference between a pair of binary stars and a pair of
nuclear particles orbiting around an internal axis within the nucleus, although in a practical sense
there is a difference for the neutron-positron orbit within the proton. Nuclei consisting of multiple
particles merely consist of many such pairs. Einstein would have hypothesized an obvious
difference.
Einstein had retained the spatial theory of field structure: Field assigns qualities to
otherwise empty points in space. According to general relativity, gravitation assigned one set of
values for the units of time and distance for each such point in space. Each such point could have
only one set of values for these units. If this point is affected by several gravitational field
sources, as all points are, then the values of the units of time and distance at this point must be a
composite sum of all affecting field sources.
With regard to gravitational waves, this meant that such waves emitted by more than one
orbital pair could cancel. In the case of nuclei containing many such pairs, gravitational waves
could be phased so as to mutually cancel each other. This would take place if two pairs were
phased such that the gravitational wave emitted by each would be phased exactly opposite the
emissions of the other pair. There would be no net emission of gravitational waves because the
opposing sine waves of emission would cancel.
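The cancellation claimed here is simple superposition, and a Python fragment of a few lines
illustrates it. The common frequency, the unit amplitudes, and the exactly opposite phasing of the
two pairs are the assumptions; given them, the net emission is zero to within rounding error.

import numpy as np

t = np.linspace(0.0, 2.0, 1000)          # time, arbitrary units
omega = 2 * np.pi                        # assumed common orbital frequency

wave_pair_a = np.sin(omega * t)          # emission from the first orbital pair
wave_pair_b = np.sin(omega * t + np.pi)  # second pair, phased exactly opposite

net = wave_pair_a + wave_pair_b          # net emission from the nucleus
print("max |net emission| =", np.max(np.abs(net)))   # ~1e-16, rounding error only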
Although there might be no theoretical difference between a pair of binary stars and a
pair of nuclear particles orbiting around an internal axis within the nucleus, such phase
cancellation would produce a genuine difference. The fact that pairs of nuclear particles orbit
around their common axes would be of even less practical significance than would the orbiting of
binary stars. Only in the case of the deuterium isotope of hydrogen would there be any possibility
of practical significance, the deuterium nucleus being the nuclear equivalent of a pair of binary
stars.
In any case, Einstein never contended that gravitational waves would have any great
theoretical or even modest practical significance. In Einsteinian thought, gravitational waves
existed as a mere curiosity, potentially significant only if proof of their existence could be used to
validate his theory. Gravitational waves were the gravitational equivalent of electromagnetic
waves in a conceptual sense, but certainly not in any practical sense.
I suggest that the concept of gravitational waves finds its significance in what we
experience to be structure, rather than what we experience as emission. Because the principle
manifests itself as structure, it may have far greater significance than we have ever suspected. To
appreciate this significance, however, we must first understand how the interaction that Einstein
hypothesized would produce the experience of structure rather than the experience of waves.

K. THE CLOUDLIKE BEHAVIOR OF ELECTRONS

The fourth phenomenon predicted by resonance is the cloudlike nature of electron orbits.
Relative to the background within which the atom exists, and relative to our sense of time,
electron orbits precess with extreme rapidity. This has already been predicted in my discussion of
electron orbits within their respective electron shells. However, there is a second factor that could
also contribute to this cloudlike precession of electron orbits. Precession can be caused by both
the structure of electron shells, produced by internal rotation within protons, and the rotation of the
nucleus as a whole.
The discussion that follows has practical importance for electrons that may be orbiting in
entirely circular orbits within the center of an electron shell, should such an orbit be possible.
Even absent any relationship with electrons that might cause precession, the rotation of the
nucleus as a whole will cause precession. Nuclear rotation affects the orbits of all electrons, even
any that might possess an absolutely circular orbit.
The motion of the perihelion of Mercury's orbit, one of Einstein's key illustrations of
general relativity, is but a practically irrelevant example of a principle that has vastly greater
relevance to atomic structure.
Before proceeding with further discussion, I must note that we employ two different and
perhaps irreconcilable concepts of the significance of electron orbits. With regard to protecting
the space that an atom occupies, we say that its cloud of electrons repels other electrons,
particularly those electrons that are attached to other atoms. This means that nearly every point
surrounding the nucleus within the atom's occupied electron shells will experience the frequent
presence of an electron.
An electron cannot be in two places at once. Particularly in the case of the lighter
elements, a single electron must simultaneously protect numerous peripheral points against
intrusion by stray electrons, or by electrons belonging to other atoms. This means that the electron
must protect very numerous points that are significantly away from any circular or elliptical orbit
that might be defined as being fixed relative to the atom's background. Therefore, orbital
precession means protection. Protection cannot exist without rapid precession.
If a single electron is to cover such a wide range of peripheral points against intrusion,
then its orbit must precess so as to visit each of these points very frequently. Such precession
creates a cloudlike phenomenon that other electrons cannot penetrate.
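How quickly a precessing orbit fills out such a cloud can itself be sketched numerically. In
the Python fragment below, the ellipse's axes and the assumed precession per orbit are arbitrary
illustrative numbers, and the ellipse is centered on the origin for simplicity (a real orbit would
place the nucleus at a focus). The point is only that successive orbits sweep out every direction
around the nucleus between the semi-minor and semi-major radii.

import numpy as np

A, B = 1.0, 0.6                   # semi-major and semi-minor axes (arbitrary)
PRECESSION = np.deg2rad(7.0)      # assumed advance of the orbit's axis per orbit

points = []
for orbit in range(200):          # two hundred successive orbits
    phi = orbit * PRECESSION      # orientation of this orbit's major axis
    theta = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
    # Parametric ellipse rotated by phi about the origin.
    x = A * np.cos(theta) * np.cos(phi) - B * np.sin(theta) * np.sin(phi)
    y = A * np.cos(theta) * np.sin(phi) + B * np.sin(theta) * np.cos(phi)
    points.append(np.stack([x, y], axis=1))

points = np.concatenate(points)
radii = np.hypot(points[:, 0], points[:, 1])
print(f"swept annulus: r from {radii.min():.2f} to {radii.max():.2f}")
# Every peripheral direction is visited frequently: the precessing orbit
# protects an annular, cloudlike region rather than a single fixed ellipse.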
Perhaps irreconcilably, chemistry then hypothesizes the existence of electron orbits that
are fixed relative to the atom's background, and attaches chemical bonds to these fixed positions.
This book does not attempt to reconcile the apparently irreconcilable. Once again, we are dealing
with a matter that will probably require supercomputer simulation. This book merely notes the
obvious contradiction between the present concepts of chemical bonding and the implications of
orbital precession.
Precession of orbit would be a likely means of achieving the cloudlike phenomenon that
protects atoms from intrusion by other electrons. When applied to noncircular electron orbits, the
principle implicit in the motion of the perihelion of Mercury's orbit clearly suggests that the
perihelions of elliptical orbits should precess relative to their fixed backgrounds. However, as this
chapter has just noted, the internal structure of electron shells is far more complex than the much
simpler gravitational attraction between the sun and Mercury. Simple prediction is probably not
possible. Again, we are dealing with a matter that will need supercomputer simulation.
In the case of Mercury, precession is caused by the alternating experience of differing
strengths of gravitational attraction. There is a second phenomenon that could also cause
precession: the rapid rotation of a central body. Because of resonant transformation, this rotation
causes the center of the central body to be experienced as being off center from its actual position.
If we were able to stand on the sun and view Mercury, we would see Mercury rising and
setting. Anyone looking at us from Mercury would see us rising on one side of the sun, and then
setting on the other side. Mercury experiences this rotation of the sun on its axis from its own
position in its orbit around the sun. When subjected to resonant compounding, the rising
movement of one side of the sun will create some positive compounding of the experienced
gravitational field strength. The setting movement of the other side of the sun will create negative
compounding of the experienced gravitational field strength. As a net result, Mercury will
experience the center of the sun as being offset away from its actual center and toward the rising
side of the sun. This will cause Mercury's orbit to precess to some degree.
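This mechanism can be checked qualitatively with a toy orbit integrator. In the Python
sketch below, the attracting center that the orbiting body experiences is displaced sideways,
perpendicular to its line of sight, by a small assumed amount. With the displacement set to zero
the perihelion direction stays fixed; with it nonzero, the perihelion drifts from orbit to orbit. The
displacement, masses, and starting conditions are arbitrary illustrative choices, not values derived
from this book.

import numpy as np

GM = 1.0
DELTA = 0.01   # assumed sideways displacement of the experienced center

def accel(pos):
    # Attraction points toward a center displaced along the rotating body's
    # equator, perpendicular to the line of sight, rather than toward the
    # true center at the origin.
    r = np.linalg.norm(pos)
    sideways = np.array([-pos[1], pos[0]]) / r    # unit vector, 90 degrees from pos
    d = DELTA * sideways - pos
    return GM * d / np.linalg.norm(d) ** 3

pos, vel, dt = np.array([1.0, 0.0]), np.array([0.0, 1.1]), 1e-3

perihelion_deg, prev_r, falling = [], np.linalg.norm(pos), True
for _ in range(400_000):                          # velocity Verlet integration
    a = accel(pos)
    pos = pos + vel * dt + 0.5 * a * dt ** 2
    vel = vel + 0.5 * (a + accel(pos)) * dt
    r = np.linalg.norm(pos)
    if falling and r > prev_r:                    # radius just passed a minimum
        perihelion_deg.append(np.degrees(np.arctan2(pos[1], pos[0])))
        falling = False
    elif r < prev_r:
        falling = True
    prev_r = r

# With DELTA = 0.0 these angles stay constant; with the offset they drift.
print("perihelion direction on successive orbits (deg):",
      np.round(perihelion_deg[:6], 2))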
In the case of Mercury, this effect is probably too small to measure. However, this
principle can be practically applied to the nuclei of atoms. Rotation of the nucleus will cause
nearly all orbits, including perfectly circular orbits, to precess relative to the atom's background.
Only purely circular, equatorial orbits are excepted by the nature of their orbit. Only orbits that
remain at fully nonresonant distances are excepted because of their orbital distance. These two
situations are noted as a matter of theory. In practicality, it is unlikely that any orbits are wholly
exempted. It is unlikely that any electron could enter an electron shell with so little energy as to
be able to assume an entirely nonresonant orbit.
Because of resonance resulting from the rotation of the nucleus, electrons will experience
the nucleus's center of mass/charge as shifting along the nucleus's equator, moving either to the left
or the right of the actual center of mass/charge. Electron orbits will then respond to the experienced
rather than to the actual center of mass/charge. This is likely to take place even if the electron
continues to maintain stable orbit within its shell because most stable orbits are elliptical in shape,
meaning that the electron continually moves into and out of resonance.
Let us consider what would be required for an electron orbit to be fully nonresonant.
Information that was emitted from the rising horizon of the nucleus arrives at the electron at noon
on the nucleus, and is re-emitted back to the nucleus so as to arrive at the setting horizon. The
nucleus will re-emit the information so as to arrive at the electron at what would be the emitting
point's midnight on the nucleus. The electron will re-emit the information so as to arrive at the
nucleus's rising horizon. Information is positively increased as the nucleus transmits from its rising
horizon and then is subsequently negatively decreased as the nucleus receives the information
back at sunset.
To be fully resonant, information that was transmitted by the nucleus at sunrise is
received back at sunrise. Information that was transmitted by the nucleus at sunset is received
back at sunset. Consequently, information transmitted at sunrise will compound so as to greatly
exceed field strength as predicted by the inverse squared law. But where will the electron
experience this increased field strength as coming from?
Bear in mind that we receive information from the present relative position of the
emitting particle, not the position of the particle at the time of emission. In the case of our
resonance with the nucleus, we experience the compounded field strength as coming from the
setting horizon of the nucleus because this is where the emitting particles are when we receive the
compounded information. We therefore experience the center of the nucleus as being offset from
the nucleus's actual center toward the setting horizon of the nucleus. Because this is where we
experience the nucleus as being, we will orbit relative to this point. Because the point that we
experience is offset from the actual center of the nucleus, our orbit will precess relative to the
atom's fixed background.
Ironically enough, it appears to us that we are most attracted to those nuclear particles
that are moving away from us the fastest. This directly contradicts what we would expect.
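The timing condition behind this discussion can be stated compactly: field information makes
a round trip over the interparticulate distance, and the relationship is fully resonant when the
round-trip time is a whole multiple of the nucleus's rotation period, fully nonresonant when it is
offset by half a period. The small Python sketch below classifies distances accordingly; the
propagation speed and rotation period are arbitrary assumed units.

import numpy as np

C = 1.0          # assumed propagation speed of field information
PERIOD = 1.0     # assumed rotation period of the nucleus

def phase_offset(distance):
    round_trip = 2.0 * distance / C        # nucleus -> electron -> nucleus
    return (round_trip / PERIOD) % 1.0     # leftover fraction of a rotation

for d in np.arange(0.25, 3.01, 0.25):
    frac = phase_offset(d)
    if np.isclose(frac, 0.0) or np.isclose(frac, 1.0):
        label = "fully resonant (sunrise received back at sunrise)"
    elif np.isclose(frac, 0.5):
        label = "fully nonresonant (sunrise received back at sunset)"
    else:
        label = "partially resonant"
    print(f"distance {d:4.2f}: {label}")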
Because no electron can remain in any orbital shell at a consistently resonant distance,
precession will apply in varying degrees and at varying times. There will be times of maximum
precession, alternating with times of no precession.

L. MOLECULAR AND CRYSTALLINE STRUCTURE

The fifth phenomenon predicted by resonance applies more to chemistry than to physics.
Perhaps we are expecting too much of the electron. As noted in the previous section, we are
expecting it both to repel other electrons and to bond with other atoms. We are expecting it to do
something else that appears to contradict its mass. How can electrons whose masses are thousands
of times smaller than those of nuclei repel other nuclei via interaction between the electrons of two atoms?
The hypothesis that electron clouds protect the internal spaces of atoms, preventing other
nuclei from entering this space, has always faced a certain conceptual problem: We live in the
gravitational perspective. This being the case, how could electrons, being as light as they are, have
sufficient masses to halt the movements of relatively massive nuclei? If we were able to apply
sufficient momentum to the vastly more massive nuclei, then shouldn't we be able to use their
masses to penetrate electron shells?
At first glance, there appears to be nothing about two nuclei themselves that would repel
each other. Both have mass and charge. The mass is attractive. The charge is repulsive. Yet both
nuclei have both mass and repulsive charge within themselves, and still remain intact. If the mass
and charge within the individual nucleus are insufficiently repulsive as to cause the nucleus to fly
apart, then how can this combination of mass and charge repel other nuclei?
So we have been conceptually left to depend upon electrons and their tiny masses.
Electrons have the ability to repel other electrons, but are electromagnetically attracted to other
nuclei. How can tiny electrons perform the task of keeping nuclei apart even if they do create a
cloudlike repulsion that repels the electrons of other atoms? We have ignored what electron shells
will do to invading protons.
Electron shells will compound electromagnetic interactions with protons as well as
electrons. However, this compounding will be repulsive in nature. There are no electron shells
within the nucleus because the distances within the nucleus are far too short to facilitate resonance.
Therefore, protons within the nucleus are able to live with each other. At the same time, however,
the electromagnetically repulsive nature of resonant shells will attempt to keep other protons from
penetrating them. Electromagnetic repulsion will greatly exceed gravitational attraction, which is
not compounded by the resonant shells.
An atom's concentric electron shells exist regardless of whether they are occupied by
electrons or not. One could hypothesize the existence of perhaps 20 or even 30 or more concentric
shells that are relevant to events involving other atoms. Only the inner one to six have electrons in
them.
As one nucleus moves closer to another, passing from one electron shell to the next, it
encounters increasing strengths of repulsion and accelerating rates of repulsive resonant
compounding as it passes through the electromagnetically resonant distances. The nucleus will
therefore encounter a resonant shell where the repulsive effects of resonant compounding will
exceed any attraction between the atoms.
When the first nucleus reaches such a resonant distance, it will be repelled away from the
second nucleus, probably settling into the nearest nonresonant shell. Because repulsive
electromagnetic resonance can exceed gravitational attraction, and because this resonance affects
bodies vastly more massive than electrons, we may be able to dispense with the hypothesis that it
is electron clouds that repel nuclei away from each other.
Equating mutual nuclear repulsion with the activities of electrons may be the equivalent
of adding apples and oranges. The fact that both electrons and mutual nuclear repulsion are
present in no way establishes the existence of a cause/effect relationship. To some degree, we
may be doing the same with our concepts of molecular structure. We may be attributing too much
to the activities of electrons.
Clearly, electromagnetic resonance between protons will play a role in the spacing
of nuclei relative to each other. This will affect molecular and crystalline structure. Gravitational
attraction plays a far greater role in interactions among nuclei than with electrons, suggesting that
there must be some countervailing force that prevents nuclei from being gravitationally attracted
into fusion. There may be complex orbital patterns involving protons and neutrons within the
nucleus. Therefore, we can expect that the effects of this resonance will be more complex than
what we encounter with electrons in their electron shells.

M. THE SIGNIFICANCE OF THE ATOM

The atom is unquestionably the fundamental unit of matter with which we interact in our
day-to-day lives, and with our everyday technologies. The atom is presently the basis of all
chemistry and of all materials technologies. However, this does not mean that we really
understand the significance of the atom.
Because we do not really understand the significance of the atom, we err. We often
attribute phenomena to the atom itself, when in fact the atoms existence itself merely manifests
more basic principles. Instead of attempting to base our understandings on these basic principles,
we have based them on assumptions concerning the significance of the atom.
As has been so characteristic of twentieth century theory, we find our answers in
concepts of structure. We do this rather than finding answers in interactive principles that can be
derived from entirely abstract, mathematical principles. The principles we employ need not
assume anything about the existence of a material universe. The greater the degree to which we
can employ mathematical principles and modeling, the greater will be our ability to use one of our
most powerful research tools, the supercomputer.
The nineteenth century was an age of dynamical philosophy. We attempted to define the
universe in mechanical terms as manifesting simple, mechanical principles that we employed in
our everyday technologies. The principles were interactive but entirely mechanical, or so we
believed.
The twentieth century was the age of structuralism. We attempted to define the structure
of matter/energy, and attempted to define interactive phenomena with regard to these structures.
I suggest that the twenty-first century will adopt yet another strategy. Concerning itself
with interaction to a far greater degree than with structure, twenty-first century theory will focus
on the nature of interactive mathematics itself. Unlike the quantum theory of today, this strategy
will make every effort to avoid formulas whose derivation and meaning cannot be understood.
Beginning with mathematical definition, this strategy will move to computer-generated,
spatial projections. What will mathematical definition, completely independent of the existence of
a material universe, predict in and of itself? These projections will then be compared with
physical reality. We will seek to discover whether the projections are in error, or whether our
previous understandings of physical reality have been in error.
In the second chapter, I disputed the ability of any science to discover laws that would be
binding upon all cultures and all ages. I argued that all scientific thought must be understood from
within the cultures of its times. Computer-generated projections will assist us in challenging our
cultural understandings because these projections will inevitably produce surprising predictions.
The nineteenth century was mostly an age of purely mechanical technologies, some
powered by the heat of fire. There were no computers. Mathematical theorization was limited to
those possibilities that could have been achieved using pencil and paper. Thought experiments
could be devised that were beyond the means of feasible mathematical computation.
The twentieth century has been a transitional century. Our technologies have become far
more advanced. We have developed electronic technologies, including computers. Only within
the past 20 years has it become possible to engage in the kinds of computer simulation that the
twenty-first century will require.
The principles of twenty-first century theory may prove to be quite simple. However,
their applications to interactions involving multitudes of particles will require vast numbers of
computations, all being applied to simultaneous events. Prior to the advent of the supercomputer,
this was not possible.
Lacking the computerized technology necessary to engage in such computations and
simulations in its earlier decades, the twentieth century had no choice but to think in terms of
aggregate structures. The computational technology necessary to break these aggregate structures
down into sets of interacting components did not exist.
Three or four centuries ago, we began building that great edifice of our modern age, the
cathedral of scientific knowledge. Now we find that our contexts and our technological
capabilities have changed so greatly that we can no longer use even the foundation stones of this
massive edifice. We must abandon construction and begin again.
Within the strategy of the twenty-first century, we can expect that the concept of the atom
will assume a different significance in both theoretical and practical senses. I suggest that we will
view it primarily as being a manifestation of the universe's dependency upon angular momentum
to create stable structure. The atom and the proton within the atom are the primary structural
levels at which the universe employs angular momentum to create structure, although we still have
no idea how angular momentum came into existence.
The atom is a manifestation of structural principles involving angular momentum. These
principles are derived from mathematical principles. What we experience as structure is so
predictable, and these predictabilities so stable, that we often assume that structure establishes
inevitabilities. Referring back to the principles from which structure is derived, we will discover
particular exceptions to our rules of structure.
We must always return to our realization that, despite the importance of angular
momentum, all interparticulate interactions are still single dimensional, particle-to-particle events.
We receive light from stars that are orbiting as parts of other galaxies. The fact that they
are parts of such entities does not mean that we do not receive light from each particle
individually.
Similarly, we interact with each particle individually, even though this particle is part of
an atom. What must concern us about the atom is not its unitary nature as a single entity, but
rather how its rotational structure of internal organization affects how we interact with each of its
particles.
There are many kinds of technological achievements that we should expect to experience
as we alter our preoccupation with the atom as being the fundamental unit of matter. At present,
we have little ability to predict what these achievements will be. As our preoccupation with the
atom as a structural unit declines, then so should the distinction we draw between chemistry and
nuclear physics. Ultimately, there will be but a single discipline.

ENDNOTES


1 Feynman, Richard, quoted by Bolton, W., Patterns in Physics. Maidenhead, Berkshire,
U.K.: McGraw-Hill, 1974, p. 66. Taken from The Feynman Lectures in Physics, vol. 1.
2 The Oxford English Dictionary. Oxford, U.K.: Clarendon Press, 1933.
3 Gimpel, Jean, The Medieval Machine. London: Book Club Associates, 1977, preface,
pp. 1-29. (translated from French)
4 Gimpel, ibid., entire book.
5 Gimpel, ibid.
6 Gimpel, ibid., pp. 82-84, 230. For discussion of the steam engine, see Dickinson, H. W.,
and Jenkins, Rhys, James Watt and the Steam Engine. Ashbourne, Derbyshire, U.K.:
Moorland Publishing, 1927.
7 Carnot, Sadi, Reflections on the motive power of fire. Excerpted by Sambursky,
Shmuel, ed., Physical Thought from the Pre-Socratics to the Quantum Physicists. New
York: Pica Press, 1975, pp. 389-394.
8 Joule, James Prescott, On the mechanical equivalent of heat. Excerpted by Sambursky,
ibid., pp. 394-397.
9 Clausius, Rudolf, Heat cannot of itself pass from a colder to a warmer body. From The
Mechanical Theory of Heat (1867). Excerpted by Sambursky, ibid., pp. 405-408. See
also discussion by Sambursky, pp. 363-370.
10 Sorensen, H. J., Creation Account. New Catholic Encyclopedia. New York: McGraw-
Hill, 1967.
11 Mueller, H., Feasts, Religious. New Catholic Encyclopedia, ibid.
12 Fohrer, Georg, Introduction to the Old Testament. Nashville: Abingdon, 1968, pp. 23-
195.
13 See Tillyard, E. M. W., The Elizabethan World Picture. New York: Vintage Books. See
also Steiner, George, After Babel. New York: Oxford University Press, 1975. Also Lea,
Kathleen M., Bacon, Francis. Encyclopaedia Britannica. Chicago: Encyclopaedia
Britannica, Inc., 1985, Vol. 14, pp. 544-549.
14 quoted by Tillyard, ibid., p. 14. Richard Hooker was an English priest who died in 1600.
The first book of his Laws of Ecclesiastical Polity, quoted by Tillyard, was published in
1584. Hooker was particularly influential in the Anglican Church as an advocate of the
three pillars of belief: scripture, the traditions of the Church fathers, and reason. If
neither scripture nor the traditions of the Church fathers provided clear answers for the
questions that one is asking, then one can and should resort to reason to obtain one's
answer. In and of itself, this argument can be perceived as sanctioning scientific inquiry,
experimentation being the aid of reason. Hooker's theology was the primary reason that
England experienced very little conflict between science and religion. Religion generally
supported scientific inquiry.
15 quoted by Tillyard, ibid., p. 16.
16 Lea, ibid.
17 Galileo, Letter to the Grand Duchess Christina. Trans. and ed. Drake, Stillman,
Discoveries and Opinions of Galileo. Garden City, N.Y.: Doubleday Anchor, 1957. See
also: Sambursky, ibid., p. 215. For differing opinions regarding Galileo's conflict with
the pope, see Langford, J. J., Galileo Galilei. New Catholic Encyclopedia, and Drake,
pp. 145-171.
18 Tillyard, ibid., p. 18.
19 quoted by Tillyard, ibid., pp. 20-21. Sebonde was a priest and a professor of theology at
the University of Toulouse, France. See New Catholic Encyclopedia.
20 Tillyard, ibid., p. 21.
21 Kuhn, Thomas S., The Structure of Scientific Revolutions. Chicago: University of
Chicago Press, 1962, pp. 24-25.
22 Kuhn, ibid., pp. 19-20.
23 Kuhn, ibid., p. 38.
24 Mirror of the World, translated from French in 1480 and later printed by Caxton. Quoted
by Tillyard, ibid., p. 39. In his notes, Tillyard states the following: The French originals
date from the middle of the thirteenth century. The work was very popular in its French
and in its English forms. Among the medieval encyclopaedias this is one of the most apt
to this book as it is both comprehensive and elementary, containing things universally
taken for granted.
25 Maxwell, James Clerk, A dynamical theory of the electromagnetic field, originally
from Scientific Papers, Vol. 1, 1890. Excerpted by Sambursky, ibid., p. 435. It should
be noted that dynamical is essentially synonymous with mechanical.
26 Michelson, Albert Abraham, The ether, originally from Light Waves and Their Uses,
1907. Excerpted by Sambursky, ibid., pp. 456-457.
27 Sambursky, ibid., p. 33.
28 Excerpted by Sambursky, ibid., p. 295.
29 Newton, Principia Mathematica (1687). Excerpted by Sambursky, ibid., p. 300.
30 Westfall, Richard S., Never at Rest: A Biography of Isaac Newton. Cambridge, U.K.:
Cambridge University Press, 1980, p. 302.
31 Descartes, Rene, Principia Philosophiae (1644). Excerpted by Sambursky, ibid., p. 244.
32 Einstein, Albert, Relativity: The Special and the General Theory. London: Methuen and
Co., Ltd., 1920, pp. 126-129.
33 Einstein predicted a deflection of 1.7 seconds of arc at the surface of the sun,
declining as one moves away from the sun according to the formula 1.7 seconds of arc
divided by the distance from the center of the sun as measured in solar radii. Because of
the light of the solar corona, it isn't feasible to photograph stars located less than two
solar radii from the center of the sun. This means that the maximum experimentally
measurable deflection is less than one second of arc. Such a slight deflection might well
result from passage through an extremely sparse, nonluminous outer region of the corona.
By comparison, atmospheric refraction at sea level is measured in minutes of arc, at
least 100 times greater than the solar gravitational deflection predicted by Einstein.
Therefore, the measurement of solar deflection might actually be measuring particulate
density in the vicinity of the sun rather than gravitational deflection.
34 Einstein, ibid., p. 46.
35 Einstein, ibid., p. 47. It should be noted that Einsteinian relativity is a transformational
theory. The transit of electromagnetic radiation from one inertial perspective to another
must produce a transformation. However, Einsteinian theory differs from mine in the
nature of the transformation. According to Einsteinian theory, the transit of a photon
from one inertial perspective to another does not alter the mutual velocity of the sending
and receiving particles, but does alter the relationship of their inertial masses. In effect,
this means that the transit transforms electromagnetic radiation into inertial mass.
Einstein could not explain the conservation of mass/energy without resorting to the
hypothesis of some form of transformation. My transformational hypothesis is simpler.
36 Einstein, ibid., pp. 145-146.
37 Einstein, ibid., p. 153.
38 Einstein, ibid., pp. 155-156.
39 Einstein, ibid., p. 144.
40 Einstein, Albert, and Infeld, Leopold, The Evolution of Physics. Cambridge, U.K.:
Cambridge University Press, 1938, p. 263.
41 Webster's Ninth New Collegiate Dictionary. Springfield, MA: Merriam-Webster, 1986.
42 Einstein and Infeld, ibid., p. 93.
43 I offer the following challenge:
I will pay one dollar out of my pocket to the first person or team to drive a
wholly non-fuel-dependent, non-solar vehicle from Times Square in New York City to
Fisherman's Wharf in San Francisco, subject to the condition that batteries and/or
hydraulic accumulators cannot be recharged from any external source after leaving Times
Square. The vehicle must use decompressive traction as its primary source of power.
If this can be done, it will mark a turning point in history, the end of the Age of
Energy. For the early twenty-first century, it would be the equivalent of Charles
Lindbergh's 1927 flight from New York to Paris.
I'm betting a dollar that nobody can meet the challenge, and would gladly lose
my bet.
