
Nathan W. Lindstrom
Professor Strozier
Minds and Machines
August 20, 2010

Artificial Brain

When the idea of artificial intelligence (AI) was first conceptualized, scientists hoped for impressive achievements, including the creation of computers able to equal or even surpass human thought. These hopes have since turned out to be wildly optimistic. Conventional AI has many glaring flaws, including an inability to make decisions in even slightly unfamiliar areas and a demanding need for tediously precise instructions to perform the most basic of tasks. However, a new method of constructing an artificial brain, using neural networks (NNs), may displace the conventional AI program, because NNs attack problem solving in a very different manner, allowing NN-based computers to work successfully on challenges that regular AI computers cannot.

In order to understand why conventional computers fail at some tasks, we need to know how they work. Conventional computers use precisely detailed mathematical procedures called algorithms to solve known problems. Unfortunately, one very large set of problems we encounter every day, all those problems which cannot be foreseen and which occur on a highly random basis, cannot be defined so precisely. In fact, the only way these situations could be described precisely to an intelligence lacking any prior knowledge of the subject, like a computer, would be to list descriptions of every possible solution to every possible problem, an NP-complete problem in itself, as the possibilities are indeed limitless. The difficulty of defining these random problems can be easily demonstrated by trying to describe a tree in a manner a computer could use. Typical definitions we might give would include concepts like leaves and branches. However, consider that the computer has no concept of what branches or leaves are; these concepts need to be defined as well if they are to be of any use. Even if a computer could understand these ideas, how would you explain to a machine the difference between a leafless tree in the winter and a telephone pole? After all, both have branches, and in the winter neither one has leaves (Abu-Mostafa and Psaltis 92).
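To make the brittleness concrete, here is a toy sketch in Python of the kind of rule-based "tree detector" a conventional program would need; the feature names and rules are invented purely for illustration and are not taken from the sources cited.

```python
# Toy rule-based "tree detector"; all features and rules are invented for illustration.

def is_tree(has_branches: bool, has_leaves: bool, is_wooden: bool) -> bool:
    # A conventional program needs every defining property spelled out in advance.
    return has_branches and has_leaves and is_wooden

# A leafy oak in summer passes the test...
print(is_tree(has_branches=True, has_leaves=True, is_wooden=True))   # True

# ...but the same tree, bare in winter, fails it,
print(is_tree(has_branches=True, has_leaves=False, is_wooden=True))  # False

# ...and dropping the leaf requirement would let a wooden telephone pole
# with crossarm "branches" pass instead.
```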

From the difficulties experienced in trying to describe a tree in such precise terms, it seems clear that people do not use a conventional computer's method of problem solving. People can identify trees with or without leaves with little difficulty, and can distinguish trees from other objects, like telephone poles, that possess many tree-like characteristics. A completely different method of problem solving is needed, one that involves what scientists consider the key to random problem solving: associative memory (Abu-Mostafa and Psaltis 92).

While the workings of associative memory are still a mystery to neuroscientists, its existence is undeniable; it is used in nearly every moment of life. Simply stated, this phenomenon is what occurs when, for instance, a picture of home is viewed and immediate associations of family, dog, and friends come to mind. When this occurs, many other events and objects, ranging from the arrangement of bedroom furniture to climbing a tree in the backyard, are somehow linked in the mind to the original event, the viewing of the picture of home (Mimicking 50). If this does not seem logical at first, consider the following: if faces were identified using algorithms, then the more faces a person knew, the more slowly people would be recognized (Mimicking 52). This would happen because the algorithmic method of recognition requires sorting through all of the faces stored in memory until an exact match is found. Since Mom is not recognized more slowly with each passing year of life, a different method must be in use. This ability explains how scientists believe people attack problems in their lives: they store a vast amount of information over a lifespan and use associative abilities to recognize the underlying similarities and patterns in objects, somehow drawing on this stored knowledge to reach conclusions. This theory explains why an object like a tree cannot be exactly defined; no such ordered definition exists in the mind. This inherent impreciseness also reveals how people can be both slow and inaccurate, compared with a calculator, on a precisely ordered math problem, yet can make intuitive leaps and associate similar but not identical patterns at a speed that even the world's fastest supercomputers cannot match (Mimicking 50).
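This pattern-completion behavior can be sketched with a tiny Hopfield-style network in the spirit of the circuits Tank and Hopfield describe; the code below is only an illustrative toy (the patterns, sizes, and update schedule are my own choices, not theirs), but it shows a corrupted cue settling back to the nearest stored memory.

```python
import numpy as np

# Minimal Hopfield-style associative memory (illustrative sketch).
# Patterns are vectors of +1/-1; weights are built with a Hebbian outer product.

patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],   # "pattern A"
    [ 1,  1,  1,  1, -1, -1, -1, -1],   # "pattern B"
])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)          # units that fire together are wired together
np.fill_diagonal(W, 0)           # no self-connections

def recall(cue, steps=10):
    s = cue.copy()
    for _ in range(steps):
        for i in range(n):       # each unit looks at its neighbours and fires or not
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[0] *= -1                   # flip one element: a "partial" or corrupted memory
print(recall(noisy))             # settles back to pattern A
```

Because the stored knowledge is spread across the whole weight matrix, the network recovers the complete pattern from a damaged cue rather than searching a list of stored items one by one.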

Another benefit of this ability to make judgments from partial patterns is the ability to react and make decisions in situations that have never before been experienced, by applying rules developed from similar past experiences. As this sequence repeats itself, new experiences are accumulated for future use, and each new experience teaches how to apply previous experiences more effectively. This process is called learning.

Computers able to solve problems as humans do would be able to solve random problems, a feat conventional AI is incapable of. It thus seems clearly advantageous to develop computers capable of using associative memory. In fact, as was already suggested, one of the most promising growth areas of the computer industry deals with NNs that work much like the human mind. Essentially, NNs are mechanical copies of the structure of the neurons in our brain, consisting of many simple processing units with an extremely high number of interconnections (Trappl 150). The interconnections between specific neurons constantly change in strength and configuration as information is processed. This pliability of the interconnections also permits the network to save information in the structure of the connections themselves, allowing surprisingly vast quantities of data to be stored (Tank and Hopfield 113). Typically these mechanical neurons are arranged in layers, with extensive connections within a layer and a smaller number of connections to assorted neurons in the layers above and below.
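A rough sketch of this layered arrangement, with the layer sizes and connection strengths invented for illustration, shows how the "knowledge" of such a network lives entirely in the pattern of connection weights rather than in any stored list of facts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three layers of simple units; the connection strengths between layers are the
# network's only "memory" (sizes and values chosen arbitrarily for illustration).
layer_sizes = [16, 8, 4]
weights = [rng.normal(size=(m, n)) for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each unit sums the signals from the layer below and "fires" (1) or not (0).
    for W in weights:
        x = (W @ x > 0).astype(float)
    return x

print(forward(rng.normal(size=16)))   # activity pattern of the top layer
```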

These neurons can exist in two separate states, firing and not firing. In other words, a neuron is either sending a message to all connected neurons or it is not; there are no other possibilities.

Whether in the brain or in a computer, an individual neuron constantly examines the other neurons attached to it, and from these neighboring neurons it determines its own future state (either firing or not firing) by applying a few simple rules (Mimicking 53). Unfortunately, although scientists have discovered some of the rules neurons in the brain follow, including the rule by which neurons increase their communication strengths when connected neurons fire at the same time, many other rules have not yet been discovered (Allman 53).

To solve a problem, the neurons work collectively, using a set of specific rules, to reach a decision in a manner that closely resembles a room full of people debating an issue in search of an answer. As explained before, each of the network's neurons gives its opinion, which in turn affects other neurons, which then affect yet more connected neurons, including the original signaler. This process continues until the group reaches a pattern closely resembling the original. Also, some neurons (either through initial design or thanks to learning) are very powerful communicators which, like influential human speakers, can correct the whole network when it is reaching an inaccurate decision (Allman 53).
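The one rule named above, strengthening the connection between two neurons whenever they fire at the same time, can be written in a few lines; this is a deliberately crude sketch, not the brain's actual update equation:

```python
import numpy as np

def hebbian_step(W, activity, rate=0.1):
    """Strengthen connections between units that are active at the same moment.

    activity: vector of 0/1 firing states for one moment in time.
    (A toy version of the rule; real neurons are far messier.)
    """
    co_firing = np.outer(activity, activity)   # 1 where both units fired together
    np.fill_diagonal(co_firing, 0)             # ignore self-connections
    return W + rate * co_firing

W = np.zeros((4, 4))
W = hebbian_step(W, np.array([1, 1, 0, 0]))    # units 0 and 1 fired together
print(W)                                       # their mutual connection has grown
```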

These NN computers offer substantial advantages over conventional computers in two major areas: first, in modeling the human brain and nervous system in order to learn more about these organs; and second, in solving problems involving pattern recognition, where normal computers fail.

In the first instance, modeling the human nervous system, many researchers are using NNs to explore a variety of different areas. Gary Lynch, a neurophysiologist at the University of California, Irvine, explains the main advantage of NNs when he notes, "When you try to study memories you are almost the memories you're studying, and it almost gets in the way. But if you can make a model in silicon, you can stand back from it and study it" (Mimicking 53). More specifically, Lynch and Dr. Granger have set up a 500-neuron network to model how the brain distinguishes smells. When they first began their experiment, the network responded with a unique pattern to each odor; however, after more odors had been processed, the more active neurons began producing stronger signals, eventually becoming representatives for each basic category of odor. After six samples from each group had been processed, the network had organized itself to the point that it responded with the same signal pattern for new smells of the same category. The real surprise came when smells were analyzed by the network for a second time; instead of responding with the old pattern, the network reconstituted itself, creating a slightly different pattern for each specific odor. Essentially, as the two researchers put it, the network decided on its first sniff that a smell was cheese; on the second sniff it classified the smell as a specific type of cheese, like cheddar (Allman 51).

A more conventional study using NNs to analyze the brain was performed by David Zipser of the University of California, San Diego, and Richard Andersen of M.I.T. In their study they trained an NN to judge the distance between the eye (a CCD) and an object, using data obtained from an EEG connected to a monkey that was watching an identical object. Since the scientists already knew the position of the object that corresponded to specific EEG signals from the monkey, the NN could adjust itself by comparing its results to the EEG-supplied correct answers until it could consistently get the right answer (Allman 54). This example is especially important because it points to one area in which NNs may have a huge impact: permitting robots inside fully automated factories to sense their environment visually. Experiments like this strongly suggest that NNs coupled to CCDs could serve as the eyes of industrial robots, a major step toward solving this challenge.
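The training procedure described here, repeatedly comparing the network's output with the known correct answer and adjusting the connections until the two agree, is essentially error-correction learning. A minimal single-unit sketch with invented stand-in data (not the actual Zipser and Andersen model) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in data: each row is a "signal" vector, each target a known
# correct answer (e.g., a distance), as in supervised error-correction learning.
signals = rng.normal(size=(200, 3))
true_weights = np.array([0.5, -1.0, 2.0])
targets = signals @ true_weights

weights = np.zeros(3)
rate = 0.05
for _ in range(100):                     # repeat until the answers are consistently right
    for x, t in zip(signals, targets):
        error = t - weights @ x          # compare the output with the supplied answer
        weights += rate * error * x      # nudge the connections to reduce the error

print(np.round(weights, 2))              # approaches [0.5, -1.0, 2.0]
```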

Finally, another group of experiments, dealing with a structure found in the brains of monkeys, shows the variety of ways in which NNs can model the brain. Two Harvard researchers, David Hubel and Torsten Wiesel, discovered a group of neurons in the brains of monkeys that helped them process patterns formed of bars of light. They also found that these structures developed before birth, before any light could have stimulated these neuron groups. To explain this phenomenon, Ralph Linsker of IBM used an NN arranged in several layers that obeyed the laws of interneuron communication the brain uses. He then started his network with totally random connections and fed it an equally random stimulus. When this input had propagated up to the top layer of the NN, he found that the layer had formed structures that responded to bars of light in exactly the same way the monkey neurons did (Allman 53). These structures reappeared later in a different experiment performed by Terrence Sejnowski of Johns Hopkins University. In his study, he successfully trained an NN to judge how spherical an object was through analysis of the object's shadow. However, he was surprised to find that the individual neurons in his network were most active when stimulated with bars of light, despite never having been trained to react to this particular stimulus. It turned out that the neurons in Sejnowski's network were responding like the specialized monkey neurons already mentioned. These discoveries were so persuasive that neuroscientists are currently looking for similar structures in the brains of humans (Allman 53, 54).

This particular ability of NNs suggests a way to give a robot the ability to navigate an obstacle course, such as a series of interconnecting hallways. The bottom-most layer of the NN would be connected to a CCD such that patterns seen by the CCD were reflected in the firing states of individual neurons, with darker areas producing more rapidly firing neurons and lighter areas causing neurons to fire more infrequently.

The top-most layer would contain an image of where the robot wanted to be; viz., at the end of the hall and not, for example, hung up on a corner.

The intermediary layers would be connected to the robot's motor functions, such that as each succeeding NN layer adjusted itself to try to bring the CCD-supplied pattern into agreement with the target pattern of the topmost layer, electrical impulses would be sent to the robot's motors, propelling it along the hallways and around corners.
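A very rough sketch of that control loop, with the image size, the target pattern, and the way mismatch is turned into a motor command all invented for illustration, might look like this:

```python
import numpy as np

# Toy sketch of the described control loop: compare what the "CCD" layer sees with
# the target pattern on the top layer and nudge the motors to reduce the mismatch.
# Every detail here is invented for illustration, not taken from the sources cited.

def ccd_to_activity(image):
    # Darker pixels (lower brightness) produce faster firing, as described above.
    return 1.0 - image

def choose_motor_command(current, target):
    # Steer toward whichever half of the view already agrees best with the goal.
    mid = len(current) // 2
    left_mismatch = np.abs(current[:mid] - target[:mid]).sum()
    right_mismatch = np.abs(current[mid:] - target[mid:]).sum()
    if abs(left_mismatch - right_mismatch) < 1e-6:
        return "forward"
    return "turn left" if left_mismatch < right_mismatch else "turn right"

view = np.array([0.2, 0.3, 0.8, 0.9])      # pretend CCD brightness values
goal = np.zeros(4)                         # target activity: open hallway ahead
print(choose_motor_command(ccd_to_activity(view), goal))
```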


Initially, the robot would no doubt twitch and jerk and move in a seemingly aimless fashion. But as the middle NN layers learned (that is, remembered, or began to organize themselves into long-term memory stores) how to adjust themselves to bring the bottom image into closer and closer alignment with the top image, and thereby drove the motors, the robot would gain increasing purpose in its movements until it was unerringly navigating the hallways, almost as though it possessed an actual brain and not just many layers of NNs.

However, all this modeling of an actual human brain may have severe limitations, for two main reasons. First, the body and the brain are inseparably linked by a variety of bodily chemicals, including hormones. These chemicals can have a large impact on the brain's functioning; for example, scientists now believe that higher quantities of estrogen reduce the brain's ability to solve spatial problems, a theory that would explain why women tend to solve these problems less effectively than men. Second, some philosophers believe that if the functioning of the brain is reduced to purely mechanical processes, free will is destroyed. In other words, if the brain follows intelligible laws, the brain's reactions can be predicted from knowledge of those laws. Human action then ceases to be free will and essentially becomes a machine following its programming (Casti and Karlqvist 185).

But leaving aside philosophical questions for the moment, there is no doubt that NNs have wide-ranging industrial applications. For example, GTE uses NNs that track changes in heat, pressure, and the mixes of chemicals used to produce fluorescent light bulbs in order to determine the optimal manufacturing conditions for these products. Also, Siemens in West Germany, Ford's major supplier of car heater components, had a quality-control problem which these machines have been used to solve. Some of the heaters broke down easily, while others did not. Fortunately, the defective heaters made a slightly different sound than the good ones, so Siemens tried to train its employees to detect these noises, but failed. Next, the company tried to get a conventional computer to do the job, but it failed as well. Finally, the company taught an NN to analyze the problem by listening through microphones connected to its neurons. This last method was met with a success rate of over ninety percent (O'Reilly 91).

Finally, the military is testing a system developed by Allen Gevins of Systems Laboratories Institute to detect mental fatigue in soldiers before mistakes in judgment begin to happen. Gevins's system uses an EEG machine connected to an NN. While the NN watches the EEG, he feeds it information on how the subject is responding and on the quality of the subject's decisions. Eventually, the NN learns which patterns of brain activity are linked to which actions, resulting in a machine that can predict whether the subject will make an accurate decision over 67% of the time (Mimicking 53). In theory, a more sophisticated network could predict human responses before they occurred simply by reading the subject's neural patterns.

Unfortunately, NN technology has two types of weaknesses: those inherent to NNs, and those resulting from the youth of the technology as a whole. The first inherent problem is the difficulty of limiting the amount of information that must be dealt with. For example, when Bell Laboratories designed a network to read handwritten zip codes, the researchers reported this limiting of information to be the biggest developmental problem (O'Reilly 92). Our brains have billions more neurons than any NN, yet they regularly edit out huge amounts of information. When we consider all of the things we forget or never even notice in the first place, the scale of this task comes into focus; many of us probably cannot remember the color of a neighbor's car or what the King of Hearts is holding in his hands, despite having viewed these objects hundreds of times. In addition, NNs, like us, perform algorithmic functions slowly and imprecisely (Abu-Mostafa and Psaltis 95).

The relative youth of NN technology, however, is currently a far more serious problem. Most importantly, nearly all of the NNs in use today do not run on hardware specifically designed for them. NN hardware has different needs, requiring huge numbers of interconnections between many simple processors instead of the smaller number of connections between the more powerful processors of today's computers. A related problem is that current hardware simply is not fast enough to run many network applications. Another, less important, problem is that researchers do not really understand how or why NNs work, making the initial setup of a new NN a very difficult task, one that resembles an art more than a science (O'Reilly 90). For example, a common problem with NNs in business settings is adjusting the learning rate of the machine. If this rate is too high, the machine fails to learn anything at all; if it is too low, the machine takes too long to become useful, making it prohibitively expensive. Unfortunately, there are no hard rules yet for how to perform this delicate adjustment, and qualified help is both difficult and expensive to find. As one business executive puts it, "You call up the NN company and tell them you want to know how to adjust the learning rate. They don't know. Nobody can tell you. It becomes very time consuming" (O'Reilly 91, 92).
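The learning-rate trade-off shows up even in the simplest possible setting; the sketch below minimizes a one-parameter toy error curve by gradient descent (not any particular commercial NN) with a step size that is too large, one that is too small, and one that works:

```python
def gradient_descent(rate, steps=20):
    # Minimize the toy error curve E(w) = w**2, starting from w = 1.0.
    w = 1.0
    for _ in range(steps):
        w -= rate * 2 * w          # the gradient of w**2 is 2w
    return w

print(gradient_descent(rate=1.5))    # too high: the estimate blows up
print(gradient_descent(rate=0.001))  # too low: barely moves after 20 steps
print(gradient_descent(rate=0.3))    # a workable middle ground
```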

Fortunately, the recent upsurge of interest should resolve these technological problems. The first international conference on NNs drew over 2,000 scientists, and the market for this technology is expected to grow over 250% next year alone (Mimicking 52; O'Reilly 91). Also, many scientists are beginning to work on components made specifically for NNs, and new developments in technology, especially in optical and quantum technologies, may go a long way toward solving the speed and connection challenges (Abu-Mostafa and Psaltis 88; Bylinsky 116). Although no one really knows how slow this progress will be, one thing is certain: it will continue, for the possibilities of NNs are too vast to be ignored. They provide us with an incredibly powerful tool for understanding how our minds work and develop, and they perfectly complement the strengths of their more traditional computer counterparts. For example, networks could serve as the senses, translating random problems into traditional forms that conventional computers could understand; the regular machines could then use their superior algorithmic speed to solve these NN-reduced problems. By far the most exciting possibility, however, is the fulfillment of many scientists' greatest hope: the creation of a machine that thinks as creatively as we do. As Andersen puts it, "Basically, the human brain is an NN. It will take time but eventually we will reproduce it" (O'Reilly 92, 93).

Works Cited

Abu-Mostafa, Yaser S., and Demetri Psaltis. "Optical Neural Computers." Scientific American (1987): 88-95.

Allman, William F. "How the Brain Really Works Its Wonders." U.S. News and World Report 27 June 1988: 48-54.

Bylinsky, Gene. "A Quantum Leap in Electronics." Fortune 30 Jan. 1989: 113-120.

Casti, John L. Paradigms Lost: Images of Man in the Mirror of Science. New York: William Morrow & Company, 1990.

Casti, John L., and Anders Karlqvist, eds. Real Brains, Artificial Minds. New York: Elsevier Science Publishing, 1987.

"Mimicking the Human Mind." Newsweek 20 July 1987: 52-53.

O'Reilly, Brian. "Computers That Think Like People." Fortune 27 Feb. 1989: 90-93.

Pinker, Steven. How the Mind Works. New York: Norton, 2009.

Rucker, Rudy. Infinity and the Mind. Princeton: Princeton UP, 1995.

Tank, David W., and John J. Hopfield. "Collective Computation in Neuronlike Circuits." Scientific American Dec. 1987: 104-114.

Trappl, Robert, ed. Cybernetics: Theory and Application. New York: Hemisphere Publishing, 1983.
