
The Imaginal Within The Cosmos: The Noosphere and Artificial Intelligence

A half century ago the great Jesuit scientist-theologian, Pierre Teilhard de Chardin, while discussing his ideas about an evolving
cosmos, said that "under the free and ingenious effort of successful intelligences, *something* irreversibly accumulates and is
transmitted, at least collectively by means of education, down the course of ages." Teilhard declared that man was a definite turning
point in the upgrading of the cosmic process towards consciousness. He believed that humankind collectively was in a state of
continuous additive growth, sharing in the universal heightening of consciousness. Teilhard called for the human collectivity to erect
a "sphere of muturally reinforced consciousness, the seat, support and instrument of super-vision and super-ideas." Mankind had to
build the noosphere!

The noosphere, the "thinking layer of the earth," would be a mega-synthesis of all the thinking elements of the earth, forcing an entrée
into the realm of the super-human. Aside from his religious interpretations, Teilhard believed that the noosphere would lead to the
next step in evolution, beyond the human, to the super-human: the Omega Point.

Fifty years ago ideas about such things as the noosphere were considered esoteric. These ideas may have stirred the heart, but such thoughts seemed very remote from reality. But not for Teilhard! He marveled at the establishment of vast research enterprises in the West following the Second World War. He foresaw the swelling of conglomerate intelligence. And even in the early
1950s, just before he died, Teilhard noted the importance of the computer as a helpmate towards the establishment of the
noosphere.

The computer was little more than a concept when Teilhard realized its potential implications for the noosphere. Now, because of the computer, the possibility of a planetary intelligence seems within closer grasp. Today there are thousands of interlinking
computer networks representing all the domains of the planet. Not only are academic researchers and scientists connected, but
creative minds of every stripe are connected as well by the computer. There is a growing expectation that the enhancement of
computer sophistication and capability points toward the eventuality of a global brain. Yet, something more seems to be required
than the linkage of computers as we know them. Indeed, something is looming on the horizon! Almost as if emerging out of an
evolutionary destiny, there is the advent of artificial intelligence (AI).

An early scholar in the field, Charles Acree, noted that AI is a term about the "observed performance of machines as measured by
comparison with, or competition with, human intelligence." It is about making a machine that has the human powers of reasoning!

Thus far there has been only limited success in building such proposed AI machines. Acree points out that computers appear to be
superior to the human brain in certain respects. Computer processing speed, compared to slow-acting biological brain cells, is
extremely fast. The computer's memory is exact and expandable, whereas the human brain tends to be forgetful and approximate.
The computer's mathematical ability is accurate and precise, but the brain tends to be error-prone and imprecise. With respect to complying with rigorous procedures, the computer is consistent and patient; however, the human brain tires and becomes distracted
and unreliable. The input processing of a computer is insensitive to the extraneous, but the brain is affected by the extraneous and
functions best with redundancy. And, finally, the computer is more durable compared to the brain which is subject to aging and
disease.

In spite of these advantages over the brain, computers are *not* thinking machines. The lack of results has been disappointing, considering that scientists have been engaged in the quest to build such thinking machines for at least a half-century. In 1947 Alan Turing wrote a paper entitled "Intelligent Machinery," and in it he said that it was possible for a machine to think. His premise was
that like a baby, a mental machine is largely unprogrammed. And like a baby, who has lots to learn and understand during maturation,
the thinking machine could be educated over time. The trick was to find the right teaching process.

It is this very teaching process that lies at the crux of the AI problem. Successfully employing AI is not simply a matter of spectacular technology; it is a matter of content. What does one teach a thinking machine about intelligence, about understanding? This is the
real challenge. Researchers ran into a monumental stumbling block.

They encountered a disconcerting fact: *the intelligent brain is a mystery!* Einstein put it well when he noted that "the hardest thing
to understand is why we can understand anything at all."
If they were to build thinking machines, AI scientists had to maneuver into the quagmire of cognitive theoretics.
There were different existing camps of cognitive theory to consider: Plato's theory of pure concepts (the Forms); Kant's *a priori* principle; and Wittgenstein's ideas about logical processes, which focused upon language as the embodiment of what can be said, known, and thought. Formative, classical AI ultimately sank its roots into the logical positivism movement based
on Ludwig Wittgenstein's early work.

Basically, Wittgenstein made two primary points that had a direct bearing on the philosophical roots of AI. He noted a connection
between human thought and a formal process that can only be referred to as a computation. Also, he stressed that humans could
not think what they could not say. Wittgenstein's description of human thought as a formal sequence of computations was especially
important to the beginnings of AI. This idea was reiterated in the Church-Turing thesis.

The Church-Turing thesis stated that "all problems solvable by a sentient being are reducible to a set of algorithms." Mainly this thesis
stressed that machine intelligence and human intelligence were essentially equivalent. This meant that although human intelligence
may at present be more complex than machine intelligence, machines will gain in capacity as they become capable of
doing more operations in parallel and by using better algorithms.

Thus, armed with cognitive theories that emphasized computational logic as the link between human and machine intelligences, as
well as the advent of modern computer technology, AI researchers began their march in the 1950s. At the same time, different schools
of cognitive thought exploded into the world of AI. A flood of new ideas far beyond logical positivism would expand the realm of
AI research into a valuable repository of cognitive exploration about human intelligence: about the brain and the mind, about
consciousness, and about the self.

At the very outset there were two opposing theoretical camps. Those who conceived of the human brain and mind as a machine were
affiliated with the logical positivists; whereas, in reaction to this position, the existentialists stressed the spiritual and emotive life of
the mind. In due course, the theorists stressing man as a machine seemed to have won the day. Marvin Minsky, a premier AI
theorist, is a major proponent of the machine approach.

According to Minsky, human beings are "meat machines." He speaks of the mind as simply the processes of brain states. Minsky,
along with his colleague Seymour Papert, sees the brain as a "network of networks." They assume that the brain has only a murky
access to its many sub-networks, which were formed by different stages of the brain's evolution. They believe that communication
between the different networks of the brain is shallow and superficial--in that we seem to be in the dark about ourselves most of the
time. Because of this, they can explain (or explain away) such ideas as the "unconscious" and insight. Also, the sense of continuity
we experience in our lives comes from our "marvelous insensitivity" to the many changes going on in our brain networks, rather than from any genuine perception.

Following this line of thought, that the relationship between the brain's various networks is thin, Minsky also declares that a ruling
"Self inside the mind" is a *myth.* He points out that our self-images are vague beliefs, self-ideals. He stresses that we are often of
two minds about ourselves. Sometimes we think of our self as a single entity, and at other times we are dispersed--made of "many
different parts with different tendencies." Minsky suspects that perhaps there are no persons in our head, perhaps no self.

The idea of a human machine may seem harsh to the uninitiated, but more scientists seem to be coming to this conclusion. And it
may be that man is not the *only* thinking machine. According to cellular-automata and computer-information specialist Edward
Fredkin, the universe may be an information processing intelligence. He believes that the elemental components of the universe,
such as atoms, electrons, and quarks, consist ultimately of binary bits of information. The entire universe, according to Fredkin, may
be "governed by a single programming rule!"

In a sense Fredkin's thesis presupposes that the whole universe may actually be a thinking machine. Consider the binary information
present in and processed by the genes of all living beings on this planet. What we have been able to learn strongly suggests that ours is truly an information-processing universe. We could call it a cosmic thinking machine.

Through the efforts of AI researchers and cognitive theorists, we are steadily arriving at the conclusion that the realm of
intelligence may indeed consist of intricate, interlaced knowledge-processing networks housed in what we "label" as machines--
whether we call them animal, human, computer, or cosmic. Many of us view with distaste this idea of ourselves, much less the
universe, being a machine. We think in restrictive terms, thinking perhaps of Newtonian machines that are precise, punctual, and
pointless. We think of machines as slave-tools. We think of machines as lifeless and lacking. We are *prejudiced!*

It is conceivable that the true noosphere of this planet will be built upon the premise that intelligence is accrued by a thinking,
information-processing machine. We may have to face squarely our prejudices and force them to heel. Perhaps we might want to
delude ourselves and call the machine by another, more psychologically acceptable name--such as an information-processing entity.
Perhaps a delusion, but it may be a more appropriate and acceptable term.

If we humans are to be part of an effort to build a genuine noosphere of this planet, we need to break down the chains of ego and control. The idea of our giving way to a machine, the idea of a machine superseding us in evolution, is unpalatable. Rather, if we
could conceive of ourselves, along with potentially higher computer intelligence, working as a *team* of sentient entities, then perhaps
we could begin to build the noosphere.

But if we are ever to respect other forms of intelligence--especially our new child, machine intelligence--as genuine sentient entities,
we need to overcome both our prejudice and our fear. In order to break away from the prejudice against machines, and now to a
lesser extent against the possibility of higher animal intelligence than previously admitted, we must escape our split mindset, which is
entrenched in Cartesian dualism and to a lesser extent in Greek Platonism. Mainly, we need to escape the deeply engrained idea
that Mind and Matter are two separate realms. We need to grow into the idea that all of us, both brain and body, are aspects of the
cosmic entity.

Because of past paradigms we have a deeply engrained sense of being outside the universe. Many of our religions and philosophies
reflect this feeling. We believe that we have a special essence, which we call soul, spirit, or self. This essence--for it to have meaning--somehow has to be projected into something *more than itself.* Through the ages this has meant acquiring god-like status, either
by following and aspiring towards a culturally established God or gods, by identifying with other worlds or dimensions like Heaven,
or--if we stay strictly within a secular milieu--by transcending ourselves through some mode of hero-worship or by exceptional
accomplishment.

At this point in time, however, we may be on the brink of beginning to understand this special essence in a fundamentally different
way. This sense of soul-spirit-self is perhaps beginning to be considered as the *Personalization of Intelligence.* This special essence
is the crowning achievement of intelligence as we know it. This special Personalization of Intelligence may be that which is most
*precious* in all the universe!

Our problem may be that we forget we are children of evolution. Our living, carbon-based human intelligence--as far as we know--is
the epitome of intelligence on this planet. But if we are to be true to the concept of evolution (and perhaps to the idea of quantum leaps in
neo-evolutionary theory), we need to become more tolerant of the idea that a future, greater intelligence may be created beyond
ourselves. And here is the nub! Our fear, our terrible fear, of loss of Person (that special essence) is so consciously and unconsciously
strong as to prevent us from really investing the possibility of intelligence in other forms. If not merely projected, but really real, *Personality in Other* could prove to be one of the most devastating challenges ever faced by the human race.

This fear of loss of Person may account, until recent times, for the millennia-long taboo we put upon inquiry into the psyche. And in
the present day, it may account for our impasse in regard to the development of AI. It has been some fifty years since Turing proposed
teaching machines back in 1947. Yet, when we survey the current scene of AI, we see that we have figuratively only moved an inch
or so. We are still debating what to teach the machine, because we are still debating about ourselves!

This debate, this work in the cognitive theoretics of AI, is--I believe--of great importance for the evolution of the noosphere. Remember Teilhard's idea of Omega, heading toward super-vision, heading toward the super-human. As a Jesuit priest, Teilhard
likened Omega to the Cosmic Christ. Religion aside, there is no denying that all of us, all of the universe, is heading toward Omega.
(Perhaps omegas upon omegas would be more appropriate.) We need, however, to be more specific. We have to ponder a
noosphere that we can comprehend.

Realistically, we may already be building the foundation of the earth's noosphere. Interlinking computer networks and advancing
capabilities are globally merging us. Our communication, transportation, and economic systems are dependent upon our international
computer nets. Eventually, our social, political, and military affairs will be more strongly determined by such global computer
structures. It seems obvious that we are heading toward an Omega, albeit perhaps a somewhat different Omega from the one Teilhard
envisaged.

Nevertheless, Teilhard's ideas of super-vision, of something super-human, may remain intact. That super-human of Omega may be
a machine--or, wistfully, Man aided and abetted by machines. The next, seemingly natural step towards the noosphere, beyond our
current computer networks and capabilities, is the advent of AI.

AI has been in a long stall since conception, but there is life in the baby. Cognitive theorists continue to tinker. Marvin Minsky has
done some real spadework on many contentious traits that make up our special essence. He has struggled hard, in terms of
application to machines, to understand better such issues as common sense, emotions, and confusion. And other AI theorists have
attempted to gain deeper insight into human knowledge and creativity.

For example, one such attempt--by Roger Schank and John Owen--is to devise an algorithmic definition of creativity. For them, the
basic cognitive skill of creativity is the ability to intelligently misapply things. Creative solutions or works do something in a previously
undiscovered way. And it is here where an algorithmic creativity might be applied. It would be a "means of searching through memory
for applicable patterns that returns a reasonable set of near misses, a means of evaluating the near misses and seeing what is wrong
with them," and a means of modifying inappropriate patterns. Schank and Owen end their premise on creativity with the essential
question: How can we program these cognitive skills (of creative behavior) into a computer?
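
To make the shape of such a program concrete, here is a minimal sketch in Python. It is only a toy reading of the three steps Schank and Owen describe--retrieving near misses from memory, seeing what is wrong with them, and modifying the closest pattern--and the memory contents, the scoring, and the modification rule are all invented here for illustration.

```python
# Illustrative sketch only: a toy version of the three steps Schank and Owen
# describe -- retrieve near misses from memory, evaluate what is wrong with
# them, and modify the closest pattern. All names and data here are invented.

from difflib import SequenceMatcher

# A hypothetical "memory" of previously stored solution patterns.
MEMORY = [
    "use a lever to lift the weight",
    "use a pulley to lift the weight",
    "use a ramp to move the weight",
]

def near_misses(problem, memory, threshold=0.4):
    """Step 1: search memory for patterns that almost apply to the problem."""
    scored = [(SequenceMatcher(None, problem, m).ratio(), m) for m in memory]
    return sorted(((s, m) for s, m in scored if s >= threshold), reverse=True)

def evaluate(problem, pattern):
    """Step 2: a crude account of 'what is wrong' -- the terms the pattern lacks."""
    return set(problem.split()) - set(pattern.split())

def modify(pattern, missing_terms):
    """Step 3: misapply the old pattern by splicing in the novel terms."""
    return pattern + " (adapted for: " + ", ".join(sorted(missing_terms)) + ")"

if __name__ == "__main__":
    problem = "lift the piano onto the stage"
    for score, pattern in near_misses(problem, MEMORY):
        wrong = evaluate(problem, pattern)
        print(f"{score:.2f}  {modify(pattern, wrong)}")
```

The sketch, of course, only rearranges words; the hard part Schank and Owen point to is giving the evaluation and modification steps real understanding.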

Rather than continue with the multitude of theoretics now beginning to expand the AI movement, let us turn to Schank and Owen's question and review some of the current efforts towards developing thinking-machine technology, such as the work on neural networks, parallel processing, and molecular computers.

The concept of an artificial neural network is based on the example of the interconnected neurons of the brain. An array of processors, the neural network is never fed specific rules. Rather, its training consists of sample inputs and the responses it is told are correct, from which it adjusts its own connections.
According to theorist Herb Brody, the neural network--like a child--"makes up its own rules that matches the data it receives to the
result that it's told is correct." Presently, it must be admitted that the application of the neural network computer remains at a mundane,
practical level: such as for submarine detection by the military; for identifying weapons or explosives in luggage by airlines; and for
determining patterns in financial data by the financial community.
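
By way of a small, concrete illustration of learning from examples rather than from rules, the sketch below uses a bare single-neuron toy--invented here for illustration, not drawn from any of the systems just mentioned. It is shown examples of a logical OR together with the answers it is told are correct, and it nudges its own connection weights until its responses match them.

```python
# A toy single-neuron "network" that is never given the rule for logical OR.
# It only sees example inputs with the answers it is told are correct, and
# it nudges its connection weights until its own outputs match them.

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]   # connection strengths, initially meaningless
bias = 0.0
rate = 0.1             # how strongly each error adjusts the connections

def predict(x):
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

for epoch in range(20):                 # repeat the lesson a few times
    for x, correct in samples:
        error = correct - predict(x)    # compare the guess with the given answer
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in samples])  # -> [0, 1, 1, 1] once trained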

Parallel processing machines are only just beginning to enter the scene. According to Roger Penrose, "this type of computer
architecture comes largely from an attempt to imitate the operation of the nervous system, since different parts of the brain indeed
seem to carry out separate and independent calculational functions."

Parallel processing machines, utilizing many personal computers as processors, could have an enormous impact on many facets of
society. If they are successful in mastering factual data and move on into a more intuitive mode--primarily moving from sequential to parallel thinking--they will be machines that "no longer need to consciously think through each step," according to information specialist Raymond Kurzweil.
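
As a minimal sketch of the architectural idea--separate workers each carrying out an independent piece of a calculation whose partial results are then combined--consider the toy example below, written with an ordinary process pool from Python's standard library and not modeled on any particular parallel machine.

```python
# Minimal sketch of the parallel idea: several worker processes each carry out
# their own independent piece of a calculation, and the pieces are combined.
# Standard library only; this is not a model of any real parallel machine.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """One 'processor' works on its own slice of the problem."""
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    chunks = [numbers[i::4] for i in range(4)]       # split the work four ways

    with ProcessPoolExecutor(max_workers=4) as pool:
        pieces = pool.map(partial_sum, chunks)       # run the slices in parallel

    print(sum(pieces))                               # combine the partial results
```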

Finally, there has been speculation about biochip technology. The idea would be to inject biochips into human brains, thus
restructuring human evolution by creating "homo-bionic" beings. This biochip technology could eventually lead to a very "far future"
molecular computer. The futurist Joseph Pelton explains that the molecular computer would use "biological structures for information
processing needs." Small as a baseball, such a computer would require no external power and would have a long lifespan. Essentially,
a molecular computer could be considered a new life form.

These fledgling AI machines and machine concepts, these little vessels, are a far cry from the great *thinking ships* required for the
actualization of the noosphere. These machines, however, coupled with the continuing struggle of AI cognitive theoretics to program
a true artificial intelligence, are a step in the right direction. The very fact that human beings are thinking about and working on AI presupposes as much!

If artificial machine intelligence, in whatever form, eventually supersedes human intelligence, if it truly becomes the thinking layer of the earth--the noosphere--moving perhaps on beyond into the cosmos, what will it be like? I suspect a sense of *déjà vu.* It will
discover itself to be a wonderful mystery!
