BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
Submitted by
INDEX
Table of Contents
1. Abstract
2. Introduction
3. Architecture
4. Design
8. Conclusion
9. References
ARTIFICIAL NEURAL NETWORK
ABSTRACT
The history of ANNs stems from the 1940s, the decade of the first electronic computer. However, the first important step took place in 1957, when Rosenblatt introduced the first concrete neural model, the perceptron. Rosenblatt also took part in constructing the first successful neurocomputer, the Mark I Perceptron.
In 1982, Hopfield brought out his idea of a neural network. Unlike the neurons in an MLP, the Hopfield network consists of only one layer whose neurons are fully connected with each other.
The application area of MLP networks remained rather limited until the breakthrough of 1986, when Rumelhart and McClelland introduced a general backpropagation algorithm for the multi-layered perceptron.
Examination of the human central nervous system inspired the concept of artificial neural networks. In an artificial neural network, simple artificial nodes, known as "neurons", "neurodes", "processing elements" or "units", are connected together to form a network that mimics a biological neural network.
There is no single formal definition of what an artificial neural network is.
However, a class of statistical models may commonly be called "neural" if it
possesses the following characteristics:
1. It contains sets of adaptive weights, i.e. numerical parameters that are tuned by a learning algorithm, and
2. It is capable of approximating non-linear functions of its inputs.
INTRODUCTION
In most cases, neurons are generated by special types of stem cells. One type of glial cell, the astrocyte (named for being somewhat star-shaped), has also been observed to turn into neurons by virtue of the stem cell characteristic
pluripotency. In humans, neurogenesis largely ceases during adulthood but in
two brain areas, the hippocampus and olfactory bulb, there is strong evidence
for generation of substantial numbers of new neurons.
The idea of ANNs is based on the belief that the working of the human brain, which makes the right connections, can be imitated using silicon and wires in place of living neurons and dendrites.
The human brain is composed of about 100 billion nerve cells called neurons. Each is connected to thousands of other cells by axons. Stimuli from the external environment, or inputs from sensory organs, are accepted by dendrites. These inputs create electric impulses, which travel quickly through the neural network. A neuron then either sends the message on to other neurons to handle the issue or does not pass it forward. The working of the human neural system is shown below:
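This fire-or-stay-silent behaviour is exactly what an artificial neuron imitates. A minimal sketch (the weights and threshold below are illustrative values, not taken from any particular network):

```python
def artificial_neuron(inputs, weights, threshold):
    """Mimics a biological neuron: dendrites collect weighted inputs,
    and the neuron fires (output 1) only if the combined stimulus
    reaches its threshold; otherwise it stays silent (output 0)."""
    stimulus = sum(w * x for w, x in zip(weights, inputs))
    return 1 if stimulus >= threshold else 0

# Two input signals with different connection strengths
print(artificial_neuron([1.0, 1.0], [0.6, 0.5], threshold=1.0))  # fires: 1
print(artificial_neuron([1.0, 0.0], [0.6, 0.5], threshold=1.0))  # silent: 0
```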
Mathematically, the network output is formed by another weighted summation of the outputs of the neurons in the hidden layer. This summation stage is called the output layer. In the figure there is only one output in the output layer, since it is a single-output problem. Generally, the number of output neurons equals the number of outputs of the approximation problem. The neurons in the hidden layer of the network in Figure 2.5 are similar in structure to those of the perceptron, except that their activation functions can be any differentiable function. The output of this network is given by

y = Σ_{j=1..nh} w2_j · f( Σ_{i=1..n} w1_{ji} · x_i + b1_j ) + b2

where n is the number of inputs and nh is the number of neurons in the hidden layer, f is the hidden-layer activation function, w1 and b1 are the hidden-layer weights and biases, and w2 and b2 are the output-layer weights and bias.
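As a rough sketch, the formula above can be written out directly in code; the sigmoid is assumed here as the differentiable activation f, and all weight values are arbitrary illustrative choices:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def mlp_output(x, W1, b1, w2, b2):
    """Single-hidden-layer, single-output MLP: each hidden neuron j
    computes f(sum_i W1[j][i]*x[i] + b1[j]); the output layer is a
    weighted sum of those hidden activations plus a bias."""
    hidden = [sigmoid(sum(W1[j][i] * x[i] for i in range(len(x))) + b1[j])
              for j in range(len(W1))]
    return sum(w2[j] * hidden[j] for j in range(len(hidden))) + b2

# n = 2 inputs, nh = 2 hidden neurons; all values are illustrative
x  = [0.5, -1.0]
W1 = [[1.0, 2.0], [-1.0, 0.5]]   # hidden-layer weights, one row per neuron
b1 = [0.0, 0.1]
w2 = [0.7, -0.3]                 # output-layer weights
b2 = 0.05
print(mlp_output(x, W1, b1, w2, b2))
```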
Feedback ANN :
Here, feedback loops are allowed. Such networks are used in content-addressable memories.
Working of ANNs :
Take pattern recognition as an example. The ANN comes up with guesses while recognizing. The teacher then provides the ANN with the answers. The network compares its guesses with the teacher's "correct" answers and makes adjustments according to the errors.
In supervised training, both the inputs and the outputs are provided. The
network then processes the inputs and compares its resulting outputs against the
desired outputs. Errors are then propagated back through the system, causing
the system to adjust the weights which control the network. This process occurs
over and over as the weights are continually tweaked. The set of data which
enables the training is called the "training set". During the training of a network, the same set of data is processed many times as the connection weights are continually refined.
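The training loop described above can be sketched with the classic perceptron learning rule, a deliberately simple stand-in for full backpropagation (the learning rate and epoch count are arbitrary choices):

```python
def train_perceptron(training_set, epochs=20, lr=0.1):
    """Supervised training sketch: for each (inputs, desired) pair,
    compare the network's output with the desired output and nudge
    the weights in proportion to the error (the perceptron rule)."""
    n = len(training_set[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):                  # the same training set is
        for x, desired in training_set:      # processed many times
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = desired - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the logical AND function from a labelled training set
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x, _ in data]
print(preds)  # [0, 0, 0, 1]
```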
At the present time, unsupervised learning is not well understood. This adaptation to the environment is the promise that would enable science-fiction types of robots to continually learn on their own as they encounter new situations and new environments. Life is filled with situations where exact training sets do not
exist. Some of these situations involve military action where new combat
techniques and new weapons might be encountered. Because of this unexpected
aspect to life and the human desire to be prepared, there continues to be
research into, and hope for, this field. Yet, at the present time, the vast bulk of
neural network work is in systems with supervised learning. Supervised
learning is achieving results. This is also called Adaptive Learning.
PROPERTIES OF ANN
Computational power
The multilayer perceptron is a universal function approximator, as proven by the
universal approximation theorem. However, the proof is not constructive
regarding the number of neurons required or the settings of the weights.
Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a
specific recurrent architecture with rational valued weights (as opposed to full
precision real number-valued weights) has the full power of a Universal Turing
Machine[54] using a finite number of neurons and standard linear connections.
Further, it has been shown that the use of irrational values for weights results in
a machine with super-Turing power.
Capacity
Artificial neural network models have a property called 'capacity', which
roughly corresponds to their ability to model any given function. It is related to
the amount of information that can be stored in the network and to the notion of
complexity.
Convergence
Nothing can be said in general about convergence since it depends on a number
of factors. Firstly, there may exist many local minima. This depends on the cost
function and the model. Secondly, the optimization method used might not be
guaranteed to converge when far away from a local minimum. Thirdly, for a
very large amount of data or parameters, some methods become impractical. In
general, it has been found that theoretical guarantees regarding convergence are
an unreliable guide to practical application.
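The local-minimum point can be illustrated with plain gradient descent on a toy cost function that happens to have two minima; which one the method converges to depends entirely on the starting point (the function and step size are arbitrary choices for illustration):

```python
def grad_descent(x, steps=200, lr=0.01):
    """Gradient descent on f(x) = x^4 - 3x^2 + x, a one-dimensional
    cost with two local minima; its derivative is 4x^3 - 6x + 1."""
    for _ in range(steps):
        x -= lr * (4 * x**3 - 6 * x + 1)
    return x

# Starting on the left settles in one minimum, starting on the
# right settles in the other; neither run "knows" about the other.
print(grad_descent(-2.0))
print(grad_descent(+2.0))
```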
CHARACTERISTICS OF ANN
Basically, computers are good at calculation: they take inputs, process them, and give results on the basis of calculations carried out by a particular algorithm programmed into the software. ANNs, in contrast, improve their own rules; the more decisions they make, the better those decisions may become. These characteristics are the ones that should be present in intelligent systems such as robots and other artificial-intelligence-based applications.
Typical activation functions include:
Threshold
Sigmoid
Gaussian
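As a quick sketch, the three functions named above might be written as follows (parameterisations vary; the Gaussian here uses the common exp(−s²) form):

```python
import math

def threshold(s, theta=0.0):
    """Step activation: fires (1) only when input reaches a threshold."""
    return 1.0 if s >= theta else 0.0

def sigmoid(s):
    """Smooth, S-shaped activation mapping any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def gaussian(s):
    """Bell-shaped activation, largest when the input is near zero."""
    return math.exp(-s * s)

for f in (threshold, sigmoid, gaussian):
    print(f.__name__, f(0.0), round(f(2.0), 4))
```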
APPLICATIONS OF ANN
They can perform tasks that are easy for a human but difficult for a machine −
Aerospace − aircraft autopilots, aircraft fault detection.
Time Series Prediction − ANNs are used to make predictions on stocks and natural calamities.
Control − ANNs are often used to make steering decisions for physical vehicles.
When an element of a neural network fails, the network can continue without any problem because of its parallel nature.
CONCLUSION
The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; i.e. there is no need to understand the internal mechanisms of that task. They are also very well suited for real-time systems because of their fast response and computational times, which are due to their parallel architecture.
Neural networks also contribute to other areas of research such as neurology
and psychology. They are regularly used to model parts of living organisms and
to investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that someday 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.
Finally, we can say that even though neural networks have a huge potential we
will only get the best of them when they are integrated with computing, AI,
fuzzy logic and related subjects.