Adaptivity.
Evidential response.
Contextual information.
VLSI (very-large-scale integration) implementability.
Neurobiological analogy.
A neural network can perform tasks that a linear
program cannot.
When an element of the neural network fails, the
network can continue without any problem.
A neural network learns and does not need to be
reprogrammed.
It can be applied to a wide range of problems.
[Figure: single-layer feedforward network, with an input layer of source nodes projecting onto an output layer of neurons]
Neural Networks
[Figure: a 3-4-2 multilayer network, with an input layer, one hidden layer, and an output layer]
[Figure: nonlinear model of a neuron: input signals x1, x2, ..., xm with synaptic weights w1, w2, ..., wm and a bias b feed a summing function that produces the local field v, which is passed through an activation function phi(.) to give the output y]
[Figure: recurrent network with unit-delay (z^-1) feedback elements connecting the input, hidden, and output layers]
Supervised learning;
Unsupervised learning;
Reinforcement learning.
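As a concrete illustration of the supervised paradigm, here is a minimal perceptron sketch in Python (the surrounding examples use MATLAB, but the idea is language-independent); the AND dataset, learning rate, and epoch count are illustrative assumptions.

```python
# Minimal supervised-learning sketch: a perceptron trained on the AND function.
# The dataset, learning rate, and epoch count are illustrative choices.

def step(v):
    """Hard-limit (step) activation: 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Supervised learning: adjust weights from (input, target) pairs."""
    w = [0.0, 0.0]  # synaptic weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = target - y      # teacher signal drives the update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
outputs = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_data]
print(outputs)  # matches the AND targets [0, 0, 0, 1]
```

Unsupervised learning would drop the teacher signal (the network self-organizes), and reinforcement learning would replace it with a scalar reward.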
Common activation functions: sigmoidal, signum, and step.
Weights: wj = (wj1, wj2, ..., wjm) is the weight vector of
processing element j, and wji is the weight from
processing element i (source node) to processing
element j (destination node).
Bias as Input
The bias can be treated as an additional input by setting x0 = +1 with weight w0 = b, so the local field becomes
v = w0 x0 + w1 x1 + ... + wm xm.
[Figure: neuron model with the bias as an input: x0 = +1 weighted by w0 = b, input signals x1, x2, ..., xm with synaptic weights w1, w2, ..., wm, a summing function producing the local field v, an activation function phi(.), and the output y]
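The neuron model above can be sketched numerically. This Python snippet, with illustrative weights and inputs, shows that the explicit-bias form and the bias-as-input form give the same output:

```python
import math

def neuron(x, w, b, phi):
    """Neuron model: local field v = sum_j w_j * x_j + b, output y = phi(v)."""
    v = sum(wj * xj for wj, xj in zip(w, x)) + b
    return phi(v)

def neuron_bias_as_input(x, w, b, phi):
    """Same neuron with the bias absorbed as input x0 = +1 weighted by w0 = b."""
    x_ext = [1.0] + list(x)   # prepend x0 = +1
    w_ext = [b] + list(w)     # prepend w0 = b
    v = sum(wj * xj for wj, xj in zip(w_ext, x_ext))
    return phi(v)

logsig = lambda v: 1.0 / (1.0 + math.exp(-v))  # sigmoidal activation

x = [0.5, -1.0, 2.0]   # illustrative input signals x1..xm
w = [0.2, 0.4, 0.1]    # illustrative synaptic weights w1..wm
b = -0.3               # illustrative bias

y1 = neuron(x, w, b, logsig)
y2 = neuron_bias_as_input(x, w, b, logsig)
print(y1, y2)  # both formulations give the same output
```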
Transfer functions:
compet (competitive)
hardlim (hard limit)
hardlims (symmetric hard limit)
logsig (log-sigmoid)
netinv (inverse)
poslin (positive linear)
purelin (linear)
radbas (radial basis)
radbasn (normalized radial basis)
satlin (saturating linear)
satlins (symmetric saturating linear)
softmax (soft max)
tansig (hyperbolic tangent sigmoid)
tribas (triangular basis)
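A few of the transfer functions listed above can be sketched in Python; the definitions below follow the standard toolbox formulas for the symmetric hard limit, saturating linear, and tangent sigmoid functions.

```python
import math

# Sketches of a few transfer functions from the list above,
# following their standard definitions.

def hardlims(n):
    """Symmetric hard limit: +1 if n >= 0, else -1."""
    return 1.0 if n >= 0 else -1.0

def satlin(n):
    """Saturating linear: clip n to the interval [0, 1]."""
    return min(max(n, 0.0), 1.0)

def tansig(n):
    """Hyperbolic tangent sigmoid: 2/(1 + exp(-2n)) - 1, i.e. tanh(n)."""
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

print(hardlims(-0.5), satlin(0.25), tansig(0.0))
```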
Example:
Code to create a plot of the hardlim transfer
function:
n = -5:0.1:5;
a = hardlim(n);
plot(n,a)
Assign this transfer function to layer i of a network
as:
net.layers{i}.transferFcn = 'hardlim';
Algorithms
hardlim(n) = 1 if n >= 0
             0 otherwise
a = purelin(n)
Examples
Code to create a plot of the purelin transfer function:
n = -5:0.1:5;
a = purelin(n);
plot(n,a)
Assign this transfer function to layer i of a network by
net.layers{i}.transferFcn = 'purelin';
Algorithms
a = purelin(n) = n
a = logsig(n)
Examples
Here is the code to create a plot of the logsig
transfer function.
n = -5:0.1:5;
a = logsig(n);
plot(n,a)
Assign this transfer function to layer i of a network:
net.layers{i}.transferFcn = 'logsig';
Algorithms
logsig(n) = 1 / (1 + exp(-n))
train     Train a neural network
trainb    Batch training with weight and bias learning rules
trainbfg  BFGS quasi-Newton backpropagation
trainbfgc BFGS quasi-Newton backpropagation for use with the NN model reference adaptive controller
trainbr   Bayesian regularization backpropagation
trainbu   Batch unsupervised weight/bias training
trainc    Cyclical-order weight/bias training
traincgb  Conjugate gradient backpropagation with Powell-Beale restarts
traincgf  Conjugate gradient backpropagation with Fletcher-Reeves updates
traincgp  Conjugate gradient backpropagation with Polak-Ribière updates
traingd   Gradient descent backpropagation
traingda  Gradient descent with adaptive learning rate backpropagation
traingdm  Gradient descent with momentum backpropagation
traingdx  Gradient descent with momentum and adaptive learning rate backpropagation
trainlm   Levenberg-Marquardt backpropagation
trainoss  One-step secant backpropagation
trainr    Random-order incremental training with learning functions
trainrp   Resilient backpropagation
trainru   Unsupervised random-order weight/bias training
trains    Sequential-order incremental training with learning functions
trainscg  Scaled conjugate gradient backpropagation
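To make the difference between traingd and traingdm concrete, here is a Python sketch of the two update rules on a toy one-dimensional objective; the learning rate and momentum constant are illustrative assumptions, not toolbox defaults.

```python
# Sketch of the weight updates behind traingd and traingdm, applied to a
# toy objective f(w) = (w - 3)^2, which has its minimum at w = 3.

def grad(w):
    """Gradient of f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def traingd_like(w, lr=0.1, steps=100):
    """Plain gradient descent: dw = -lr * grad."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def traingdm_like(w, lr=0.1, mc=0.9, steps=100):
    """Gradient descent with momentum: dw = mc*dw_prev - lr*grad."""
    dw = 0.0
    for _ in range(steps):
        dw = mc * dw - lr * grad(w)
        w += dw
    return w

print(traingd_like(0.0), traingdm_like(0.0))  # both approach the minimum at w = 3
```

The momentum term lets the update carry velocity through flat regions, at the cost of some oscillation near the minimum.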
%=======================================================================%
clear all
clc
t1 = cputime;

% Load the data file and split it into calibration and validation sets
data = xlsread('annallahabad.xls');
cali = data(2:150,:);    % calibration data
vali = data(151:end,:);  % validation data

% Inputs (columns 1-5) and target (column 6) for the network
caliin  = cali(:,1:5);
caliout = cali(:,6);
valiin  = vali(:,1:5);
valiout = vali(:,6);

% Feed-forward neural network
n = 3; % three hidden neurons
% NOTE: newff needs one [min max] row for each of the five input columns
net1 = newff([0 18.82; 0 650.92; 9.18 34.13; 0 79.08], ...
    [n 1], {'logsig', 'purelin'}, 'trainbr');
net1.IW{1,1} = 0.01*ones(n,5);  % initial input weights (n neurons x 5 inputs)
net1.b{1} = zeros(n,1);         % initial hidden-layer biases
net1.inputWeights{1,1}.learnFcn = 'learngd';
net1.layerWeights{2,1}.learnParam.lr = 0.1;
net1.trainParam.epochs = 500;
net1.trainParam.goal = 0.001;
net1.performFcn = 'msereg';
net1 = train(net1, caliin', caliout');

model_output = sim(net1, valiin');
net1.IW{1,1}   % display trained input weights
net1.b{1}      % display trained hidden-layer biases
model_output'  % display simulated validation outputs

error = abs(valiout - model_output');
a = error.*error;             % squared errors
b = sum(a);                   % sum of squared errors
rmse = sqrt(b/length(error))  % RMSE over the validation samples
%RMSE1 = sqrt((sum(square(error)))/150)
%RMSE = ((sum(square(error)))/150)
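The RMSE computed at the end of the script can be sketched in Python on illustrative numbers; note that the denominator should be the number of validation samples rather than a hard-coded 150.

```python
import math

# Sketch of the script's final RMSE computation, on illustrative values.

def rmse(targets, outputs):
    """Root-mean-square error between observed and modelled series."""
    errors = [abs(t - o) for t, o in zip(targets, outputs)]
    a = [e * e for e in errors]         # squared errors
    b = sum(a)                          # sum of squared errors
    return math.sqrt(b / len(targets))  # divide by the sample count

valiout = [1.0, 2.0, 4.0]       # illustrative observed values
model_output = [1.5, 2.0, 3.0]  # illustrative simulated values
print(rmse(valiout, model_output))
```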
Aerospace
Automotive
Banking
Defense
Electronics
Entertainment
Financial
Insurance
Manufacturing
Medical
Oil and Gas
Robotics
Speech
Securities
Telecommunications
Transportation