
Last year we reported on the attempts of the Swiss Brain Mind Institute to simulate the neocortical column of the rat using an IBM Blue Gene machine with 10,000 processors, and they have now announced the success of the first phase of their project.

A longer-term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project, said in 2009 at the TED conference in Oxford.[4] In a BBC World Service interview he said: "If we build it correctly it should speak and have an intelligence and behave very much as a human does."

Scientists and engineers at IBM's Almaden Research Center announced today at the Supercomputing Conference in Portland that they have created the largest brain simulation to date on a supercomputer. The number of neurons and synapses in the simulation exceeds those in a cat's brain; previous simulations have reached only the level of mouse and rat brains. Experts predict that the simulation will have profound effects in two arenas: it will lead to a better understanding of how the brain's architecture gives rise to cognition, and it should inspire the design of electronics that mimic the brain's as-yet-unmatched ability to do complex computation and learn using a small volume of hardware that consumes little power.

The cortical simulator, called C2, integrates research from the fields of
computation, computer memory, communication, and neuroscience to
re-create 1 billion neurons connected by 10 trillion individual synapses.
C2 runs on “Dawn,” a BlueGene/P supercomputer at Lawrence
Livermore National Laboratory, in Livermore, Calif.

The IBM research shows that a model of the human cortex, which has 20 billion neurons connected by about 200 trillion synapses, could be reached by 2019, given enough processing power. But Johns Hopkins University electrical and computer engineering professor Andreas Andreou says the C2 simulator underscores an undeniable fact: to better understand the brain, we're going to need a better computer.

A major problem is power consumption. Dawn is one of the most powerful and power-efficient supercomputers in the world, but it takes 500 seconds for it to simulate 5 seconds of brain activity, and it consumes 1.4 MW. Extrapolating from today's technology trends, IBM projects that the 2019 human-scale simulation, running in real time, would require a dedicated nuclear power plant.
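
As a very rough illustration of the gap, the published numbers can be combined in a back-of-envelope way (linear scaling and the 20x neuron-count factor are simplifying assumptions for illustration, not IBM's methodology):

    # Back-of-envelope compute gap between the C2 run and a real-time
    # human-scale simulation, assuming everything scales linearly.
    slowdown = 500 / 5              # 100x slower than real time
    scale_up = 20e9 / 1e9           # 20x more neurons than the cat-scale run
    print(slowdown * scale_up)      # ~2,000x, consistent with the exaFLOPS
                                    # requirement quoted later in this piece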

IBM is also working separately on nanomaterials that could enable the construction of brainlike chips. In the final phase, it plans to build a system of 100 such chips simulating 100 million neurons and 1 trillion synapses.

Researchers at IBM working on a project to simulate the internal wiring of the human brain have announced that the current simulation has surpassed the level of a cat's cortex, and now contains the equivalent of one billion neurons and ten trillion synapses.

The simulation is running on a supercomputer called Dawn Blue Gene/P, which contains 147,456 processors and 144 terabytes of RAM. The IBM team has predicted that by 2019, a computer with one exaFLOPS of computing power and four petabytes of RAM would be able to simulate the human brain.

The project has shown steady progress: in 2006 the researchers successfully modelled a mouse's brain, and in 2007 they replicated a rat's brain. Earlier this year they announced they'd reached the level of one per cent of the human cortex. The simulation models the physical connections in the brain and isn't meant to "think" like we do, but IBM hopes that by watching how the vast array of connections in the simulation behaves, they can learn more about how the brain works.

The researchers created an algorithm called Blue Matter that essentially maps the connections in the human brain, allowing the scientists to better understand how the brain stores and processes information. The idea is to give the simulation an input and, by watching how the simulation plays out, gain an understanding of how the brain of a living thing would work. The ultimate goal is to exploit nanotechnology and other breakthroughs to create a new breed of computer processors, called synaptronic processors, that think more like living things than like the conventional von Neumann model of computing that we use today.
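
The published reports do not describe C2's internals, but the general approach (drive a network of simple spiking-neuron models with an input and observe the resulting activity) can be illustrated with a minimal leaky integrate-and-fire sketch. Every size and constant below is an arbitrary illustrative choice, not a value from the IBM model:

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) network sketch: drive a random
    # network with an external input and watch which neurons end up spiking.
    # All sizes and constants are illustrative, not taken from IBM's C2.
    rng = np.random.default_rng(0)
    n = 100                                  # number of neurons
    w = rng.normal(0.0, 0.05, (n, n))        # random synaptic weights
    v = np.zeros(n)                          # membrane potentials
    threshold, leak = 1.0, 0.9               # spike threshold, leak factor

    drive = rng.uniform(0.0, 0.1, n)         # constant external input current
    spikes = np.zeros(n, dtype=bool)
    for _ in range(100):
        v = leak * v + drive + w @ spikes    # leak + input + recurrent spikes
        spikes = v >= threshold              # neurons over threshold fire...
        v[spikes] = 0.0                      # ...and reset
    print(f"{spikes.sum()} neurons spiking at the final step")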

The Dawn Blue Gene/P computer is rated at about 500 teraFLOPS, which translates to about 0.0005 exaFLOPS (FLOPS, floating-point operations per second, is a standard measure of a computer's processing power). For a human brain simulation, therefore, the researchers claim they'll need a computer that's 2,000 times faster with about 28 times the memory. Dawn Blue Gene/P is currently the 11th fastest computer in the world according to Top500, and the top supercomputer is four times as fast.
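
Those two ratios follow directly from the figures already quoted; a quick sanity check using only the article's own numbers:

    # Sanity check of the article's ratios, using its own figures.
    dawn_flops = 500e12          # Dawn: ~500 teraFLOPS
    needed_flops = 1e18          # prediction: 1 exaFLOPS
    dawn_ram = 144e12            # Dawn: 144 terabytes of RAM
    needed_ram = 4e15            # prediction: 4 petabytes of RAM

    print(needed_flops / dawn_flops)   # 2000.0 -> "2,000 times faster"
    print(needed_ram / dawn_ram)       # ~27.8  -> "about 28 times the memory"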

The human brain contains about 100 billion nerve cells called neurons,
each individually linked to other neurons by way of connectors called
axons and dendrites. Signals at the junctures (synapses) of these
connections are transmitted by the release and detection of chemicals
known as neurotransmitters. The established neuroscientific consensus
is that the human mind is largely an emergent property of the
information processing of this neural network.

Importantly, many leading neuroscientists have stated they believe important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
"Consciousness is part of the natural world. It depends, we believe,


only on mathematics and logic and on the imperfectly known laws of
physics, chemistry, and biology; it does not arise from some magical or
otherworldly quality."
"These are the network units of the brain," says Markram. Measuring
just 0.5 millimetres by 2 mm, these units contain between 10 and
70,000 neurons, depending upon the species.

I do not understand why the neocortex is a mystery to everyone. Its neuron-net circuit is repeated throughout the cortex. It consists of excitatory and inhibitory neurons whose individual functions have been known for decades. The circuit is repeated over layers whose axonal outputs feed forward as inputs to other layers. The neurons of each layer each receive axonal inputs from one or more sending layers, and all they can do is correlate the axonal input stimulus pattern with their axonal connection pattern from those inputs and produce an output frequency related to the resultant PSPs (postsynaptic potentials). Axonal growth toward a neuron is definitely the mechanism for permanent memory formation, and it is just what is needed to implement conditioned-reflex learning. This axonal growth must be under the control of the glial cells and must be a function of the signals surrounding the neurons.
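
Read literally, the single-neuron operation described above is a dot product followed by a rate function. A minimal sketch of that reading (the rate function and all values are illustrative assumptions, not from the comment):

    import numpy as np

    # One neuron as described: the PSP is the correlation (dot product) of
    # the axonal input pattern with the neuron's connection pattern, and
    # the output firing rate is some increasing function of that PSP.
    def firing_rate(inputs: np.ndarray, weights: np.ndarray) -> float:
        psp = float(inputs @ weights)   # correlate stimulus with connections
        return max(0.0, psp)            # simple rectified rate function

    x = np.array([1.0, 0.0, 1.0, 1.0])  # axonal input stimulus pattern
    w = np.array([0.8, 0.1, 0.9, 0.7])  # learned connection strengths
    print(firing_rate(x, w))            # 2.4: strong match, high rate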

The cortex is known to be able to do pattern recognition, and the correlation between an axonal input stimulus and an axonal connection pattern is just what is needed to do pattern recognition. However, pattern recognition needs normalized correlations and a means to compare these correlations so that the largest correlation is the one recognized by the neurons. Without normalization, the relative values of the PSPs would not be bounded properly and could not be used to determine the best pattern match. In order for PSPs to be compared so that the maximum-PSP neuron fires, the inhibitory neuron is needed. By having a group of excitatory neurons feed an inhibitory neuron that feeds back inhibitory axonal signals to those excitatory neurons, the PSPs of the excitatory neurons are compared: as the inhibitory signal decays after each excitatory stimulus, the neuron with the largest PSP fires before the others do, thus inhibiting the excitatory neurons with smaller PSPs. This inhibitory neuron is needed in order to achieve PSP comparisons, no question about it. For a meaningful comparison, the PSPs must be normalized. As unlikely as it may seem, it turns out that inhibitory connections growing by the same rules as excitatory connections grow to a value that accomplishes the normalization. That is, as the excitatory axon pattern grows via conditioned-reflex rules, the inhibitory axon to each excitatory neuron grows to a value equal to the square root of the sum of the squares of the excitatory connections. This can be shown by a mathematical analysis of a group of mutually inhibiting neurons under conditioned-reflex learning. This normalization does not require the neurons to behave differently from what has been known for decades, but rather requires that they interact with an inhibitory neuron as described.
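
In symbols: the claimed growth rule gives each excitatory neuron i, with excitatory weight vector w_i, an inhibitory connection equal to the Euclidean norm of that vector, so the quantity the competition effectively compares for a common stimulus x is a normalized correlation. (This restates the comment's claim; it is not an independent derivation.)

    g_i \;=\; \sqrt{\textstyle\sum_j w_{ij}^{2}} \;=\; \lVert \mathbf{w}_i \rVert,
    \qquad
    \text{effective drive}_i \;\propto\; \frac{\mathbf{w}_i \cdot \mathbf{x}}{\lVert \mathbf{w}_i \rVert}.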

Thus, by simply having the inhibitory neuron receive from neighboring excitatory neurons with large connection strengths, so that if an excitatory neuron fires the inhibitory neuron fires, and by allowing the inhibitory axonal signals to be included with the excitatory axonal input signals at the inputs to those excitatory neurons, the neocortex is able to do normalized conditioned-reflex pattern recognition as its basic function.
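
A toy version of the race described above, in which the decaying shared inhibition lets the neuron with the largest normalized PSP fire first (a sketch under the comment's assumptions; all constants are arbitrary):

    import numpy as np

    # Toy winner-take-all race: excitatory neurons share one inhibitory
    # neuron whose feedback signal decays after each stimulus; the neuron
    # whose normalized PSP first exceeds the decaying inhibition fires and
    # re-triggers inhibition, silencing the rest. Constants are arbitrary.
    rng = np.random.default_rng(1)
    W = rng.uniform(0.0, 1.0, (5, 8))         # 5 excitatory neurons, 8 inputs
    x = rng.uniform(0.0, 1.0, 8)              # axonal input stimulus pattern

    psp = (W @ x) / np.sqrt((W**2).sum(axis=1))   # normalized correlations
    inhibition = psp.max() * 1.5              # strong inhibition after stimulus
    winner = int(np.argmax(psp))              # largest normalized PSP...
    while psp[winner] <= inhibition:
        inhibition *= 0.9                     # ...crosses first as it decays
    print(f"neuron {winner} fires first and inhibits the others")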

If one thinks about it, layers of mutually inhibiting groups of neurons are all that are needed to explain the neocortex's functions. The layers of neurons are able to exhibit conditioned-reflex behavior between sub-patterns, generating new learned behaviors like those observed in humans. With layer-to-layer feedback, multi-stable behavior of layers of neurons results, forming short-term memory patterns that become part of the stimulus to other neurons. With normalized correlations, there is always an axonal input stimulus pattern that will excite every excitatory neuron.
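
The short-term-memory claim (that feedback produces multi-stable activity which outlives the stimulus) can be caricatured with two units exciting each other through a saturating nonlinearity; everything below is purely illustrative:

    import math

    # Toy feedback memory: two units exciting each other through a
    # saturating nonlinearity keep firing after a brief input pulse ends,
    # a cartoon of short-term memory held by recurrent activity.
    a = b = 0.0
    w, leak = 2.0, 0.5
    pulse = [1.0] * 3 + [0.0] * 9             # brief drive, then silence
    for t, inp in enumerate(pulse):
        a, b = (math.tanh(leak * a + w * b + inp),
                math.tanh(leak * b + w * a))
        print(t, round(a, 2), round(b, 2))
    # activity settles near a nonzero fixed point: the stored pattern
    # outlives the stimulus, while an undriven network stays at zero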

Connect the dots: a representation of a mammalian neocortical column, the basic building block of the cortex. The representation shows the complexity of this part of the brain, which has now been modeled using a supercomputer.
