
Index

S.No.	List of Experiments	Date	Sign
1	Study of Biological Neuron & Artificial Neural Networks.
2	Study of various activation functions & their Matlab implementations.
3	WAP in C++ & Matlab to implement the Perceptron Training algorithm.
4	WAP in C++ & Matlab to implement the Delta learning rule.
5	Write an algorithm for the Adaline N/W with flowchart & Matlab program.
6	Write an algorithm for the Madaline N/W with flowchart & Matlab program.
7	WAP in C++ & Matlab to implement the Error Back-Propagation algorithm.
8	Study of Genetic Algorithm.
9	Study of Matlab neural network toolbox.
10	Study of Matlab Fuzzy logic toolbox.
11	Write a MATLAB program to implement Fuzzy Set operations.
12	Write a program to implement composition on Fuzzy and Crisp relations.
13	Write a program to find union, intersection and complement of fuzzy sets.
14	Write a MATLAB program for maximizing f(x) = x^2 using GA.

Remarks:

Object 1: Study of Biological Neuron & Write about Artificial Neural Networks.

Biological Neuron
Artificial neural networks were born after McCulloch and Pitts introduced a set of simplified neurons in 1943. These neurons were presented as models that reduce biological neurons to conceptual components for circuits that could perform computational tasks. The basic model of the artificial neuron is founded upon the functionality of the biological neuron. By definition, neurons are the basic signalling units of the nervous system of a living being, in which each neuron is a discrete cell whose several processes arise from its cell body.

The biological neuron has four main regions to its structure. The cell body, or soma, has two offshoots from it: the dendrites and the axon, which ends in presynaptic terminals. The cell body is the heart of the cell. It contains the nucleus and maintains protein synthesis. A neuron has many dendrites, which look like a tree structure and receive signals from other neurons.

A single neuron usually has one axon, which expands off from a part of the cell body called the axon hillock. The axon's main purpose is to conduct electrical signals generated at the axon hillock down its length. These signals are called action potentials.
The other end of the axon may split into several branches, which end in presynaptic terminals. The electrical signals (action potentials) that the neurons use to convey the information of the brain are all identical. The brain determines which type of information is being received based on the path of the signal.

The brain analyzes all patterns of signals sent, and from that information it interprets the type of information received. The myelin is a fatty tissue that insulates the axon. The non-insulated parts of the axon are called Nodes of Ranvier. At these nodes, the signal traveling down the axon is regenerated. This ensures that the signal travels down the axon fast and constant.

The synapse is the area of contact between two neurons. The neurons do not physically touch, because they are separated by a cleft. The electrical signals are sent through chemical interaction. The neuron sending the signal is called the presynaptic cell and the neuron receiving the signal is called the postsynaptic cell.

The electrical signals are generated by the membrane potential, which is based on the differences in concentration of sodium and potassium ions inside and outside the cell membrane.
Biological neurons can be classified by their function or by the quantity of processes they carry out. When they are classified by processes, they fall into three categories: unipolar neurons, bipolar neurons and multipolar neurons.

Unipolar neurons have a single process. Their dendrites and axon are located on the same stem. These neurons are found in invertebrates.

Bipolar neurons have two processes. Their dendrites and axon are two separate processes.

Multipolar neurons: these are commonly found in mammals. Some examples of these neurons are spinal motor neurons, pyramidal cells and Purkinje cells.

When biological neurons are classified by function, they fall into three categories. The first group is sensory neurons. These neurons provide all information for perception and motor coordination. The second group provides information to muscles and glands; these are called motor neurons. The last group, the interneurons, contains all other neurons and has two subclasses. One group, called relay or projection interneurons, is usually found in the brain and connects different parts of it. The other group, called local interneurons, is only used in local circuits.
Artificial Neural Network

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program made for a common microprocessor is unable to perform. Even so, a software implementation of a neural network can be made, with its advantages and disadvantages.

Advantages:
A neural network can perform tasks that a linear program cannot.
When an element of the neural network fails, the network can continue without any problem because of its parallel nature.
A neural network learns and does not need to be reprogrammed.
It can be implemented in any application.
It can be implemented without any problem.

Disadvantages:
The neural network needs training to operate.
The architecture of a neural network is different from the architecture of microprocessors, and therefore needs to be emulated.
Large neural networks require high processing time.
Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple.
Artificial neural networks (ANN) are among the newest signal-processing technologies in the engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view to the engineering perspective. In engineering, neural networks serve two important functions: as pattern classifiers and as nonlinear adaptive filters. We will provide a brief overview of the theory, learning rules, and applications of the most important neural network models.

Definitions and Style of Computation: An artificial neural network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the artificial neural network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The artificial neural network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with great flexibility to achieve practically any desired input/output map, i.e., some artificial neural networks are universal mappers. There is a style in neural computation that is worth describing.

An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case the training is called supervised). An error is computed from the difference between the desired response and the system output. This error information is fed back to the system, which adjusts its parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable. It is clear from this description that the performance hinges heavily on the data. If one does not have data that cover a significant portion of the operating conditions, or if the data are noisy, then neural network technology is probably not the right solution. On the other hand, if there is plenty of data but the problem is too poorly understood to derive an approximate model, then neural network technology is a good choice. This operating procedure should be contrasted with traditional engineering design, made of exhaustive subsystem specifications and intercommunication protocols. In artificial neural networks, the designer chooses the network topology, the performance function, the learning rule, and the criterion to stop the training phase, but the system automatically adjusts the parameters. So it is difficult to bring a priori information into the design, and when the system does not work properly it is also hard to incrementally refine the solution. But ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems artificial neural networks provide performance that is difficult to match with other technologies. Denker said 10 years ago that "artificial neural networks are the second best way to implement a solution", motivated by the simplicity of their design and by their universality, only shadowed by the traditional design obtained by studying the physics of the problem. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.
Neural Network Topologies

In the previous section we discussed the properties of the basic processing unit in an artificial neural network. This section focuses on the pattern of connections between the units and the propagation of data. As for this pattern of connections, the main distinction we can make is between:

Feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, no connections extending from outputs of units to inputs of units in the same layer or previous layers.

Recurrent neural networks, which do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network will evolve to a stable state in which these activations do not change anymore. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).

Classical examples of feed-forward neural networks are the Perceptron and Adaline. Examples of recurrent networks have been presented by Anderson (Anderson, 1977), Kohonen (Kohonen, 1977), and Hopfield (Hopfield, 1982).

Training of Artificial Neural Networks

A neural network has to be configured such that the application of a set of inputs produces (either 'directly' or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.

We can categorize the learning situations into distinct sorts. These are:

Supervised learning or Associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).

Unsupervised learning or Self-organization, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike in the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.

Reinforcement learning: this type of learning may be considered as an intermediate form of the above two types of learning. Here the learning machine does some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there will be no more changes in its parameters. Self-organizing neural learning may be categorized under this type of learning.

Object 2: Study of various activation functions & their Matlab implementations.

Activation Functions

The activation function acts as a squashing function, such that the output of a neuron in a neural network is between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by φ(.). First, there is the Threshold Function, which takes on a value of 0 if the summed input is less than a certain threshold value v, and the value 1 if the summed input is greater than or equal to the threshold value.

Secondly, there is the Piecewise-Linear Function. This function again can take on the values of 0 or 1, but can also take on values between 0 and 1, depending on the amplification factor in a certain region of linear operation.

Thirdly, there is the Sigmoid Function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of a sigmoid-type function is the hyperbolic tangent function.

The artificial neural networks which we describe are all variations on the parallel distributed processing (PDP) idea. The architecture of each neural network is based on very similar building blocks which perform the processing. In this chapter we first discuss these processing units and different neural network topologies, and then learning strategies as a basis for an adaptive system.

Object 3: Explain the Perceptron Training Algorithm

Perceptron

The perceptron is an algorithm for supervised classification of an input into one of two possible outputs. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector describing a given input. The learning algorithm for perceptrons is an online algorithm, in that it processes elements in the training set one at a time.

The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt. [1]

In the context of artificial neural networks, the perceptron algorithm is also termed the single-layer perceptron, to distinguish it from the case of a multilayer perceptron, which is a more complicated neural network. As a linear classifier, the (single-layer) perceptron is the simplest kind of feed-forward neural network.

The perceptron is a binary classifier which maps its input x (a real-valued vector) to an output value f(x) (a single binary value):

    f(x) = 1 if w · x + b > 0, and 0 otherwise,

where w is a vector of real-valued weights, w · x is the dot product (which here computes a weighted sum), and b is the 'bias', a constant term that does not depend on any input value.

The value of f(x) (0 or 1) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable.
Perceptron Training Algorithm

Below is an example of a learning algorithm for a (single-layer) perceptron. For multilayer perceptrons, where a hidden layer exists, more complicated algorithms such as backpropagation must be used. Alternatively, methods such as the delta rule can be used if the function is nonlinear and differentiable, although the one below will work as well.

When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation.

We first define some variables:

y denotes the output from the perceptron for an input vector x.
b is the bias term, which in the example below we take to be 0.
D = {(x_1, d_1), ..., (x_s, d_s)} is the training set of s samples, where:
  o x_j is the n-dimensional input vector.
  o d_j is the desired output value of the perceptron for that input.

We show the values of the nodes as follows:

x_j,i is the value of the ith node of the jth training input vector.
x_j,0 = 1.

To represent the weights:

w_i is the ith value in the weight vector, to be multiplied by the value of the ith input node.

An extra dimension, with index n+1, can be added to all input vectors, with x_j,n+1 = 1, in which case w_n+1 replaces the bias term. To show the time-dependence of w, we use:

w_i(t) is the weight i at time t.
α is the learning rate, where 0 < α ≤ 1.

Too high a learning rate makes the perceptron periodically oscillate around the solution unless additional steps are taken.

The appropriate weights are applied to the inputs, and the resulting weighted sum is passed to a function that produces the output y.
Learning algorithm steps

1. Initialise the weights and the threshold. Note that weights may be initialised by setting each weight node to 0 or to a small random value. In the example below, we choose the former.
2. For each sample j in our training set D, perform the following steps over the input x_j and desired output d_j:

2a. Calculate the actual output:

    y_j(t) = f[w(t) · x_j] = f[w_0(t) x_j,0 + w_1(t) x_j,1 + ... + w_n(t) x_j,n]

2b. Adapt the weights:

    w_i(t+1) = w_i(t) + α (d_j − y_j(t)) x_j,i, for all nodes 0 ≤ i ≤ n.

Step 2 is repeated until the iteration error is less than a user-specified error threshold, or a predetermined number of iterations have been completed. Note that the algorithm adapts the weights immediately after steps 2a and 2b are applied to a pair in the training set, rather than waiting until all pairs in the training set have undergone these steps.

Object 4: Write about the Delta learning rule

Delta Rule

The delta rule is a generalization of the perceptron training algorithm. It extends the technique to continuous inputs and outputs. In the perceptron training algorithm, a term delta is introduced, which is the difference between the desired (or target) output T and the actual output A:

    delta = (T − A)

Here, if delta = 0, the output is correct and nothing is done.
If delta > 0, the output is incorrect and is 0: add each input to its corresponding weight.
If delta < 0, the output is incorrect and is 1: subtract each input from its corresponding weight.

A learning rate coefficient η multiplies the delta × x product, to allow control of the average size of the weight changes:

    Δ_i = η × delta × x_i
    w_i(n+1) = w_i(n) + Δ_i

where Δ_i is the correction associated with the ith input x_i, w_i(n+1) is the value of weight i after adjustment, and w_i(n) is the value of weight i before adjustment.

Implementation of Delta Rule

#include <iostream>
using namespace std;

int main()
{
    float input[3], weight[3], d, a, delta;
    for (int i = 0; i < 3; i++)
    {
        cout << "\nInitialize weight vector " << i << "\t";
        cin >> weight[i];
    }
    for (int i = 0; i < 3; i++)
    {
        cout << "\nEnter input " << i << "\t";
        cin >> input[i];
    }
    cout << "\nEnter the desired output\t";
    cin >> d;
    do
    {
        // actual output: thresholded weighted sum of the inputs
        float s = 0;
        for (int i = 0; i < 3; i++)
            s = s + weight[i] * input[i];
        a = (s >= 0) ? 1 : 0;
        delta = d - a;
        if (delta < 0)          // output is 1 but should be 0
            for (int i = 0; i < 3; i++)
                weight[i] = weight[i] - input[i];
        else if (delta > 0)     // output is 0 but should be 1
            for (int i = 0; i < 3; i++)
                weight[i] = weight[i] + input[i];
        cout << "\nValue of delta is " << delta;
        cout << "\nWeights have been adjusted";
    } while (delta != 0);
    cout << "\nOutput is correct";
    return 0;
}

Object 5: Write an algorithm for the Adaline N/W with flowchart.

Adaline Network

The Adaline network training algorithm is as follows:

Step 0: Weights and bias are set to some random values other than zero. Set the learning rate parameter α.
Step 1: Perform steps 2-6 when the stopping condition is false.
Step 2: Perform steps 3-5 for each bipolar training pair s:t.
Step 3: Set activations for the input units, i = 1 to n:

    x_i = s_i

Step 4: Calculate the net input to the output unit:

    y_in = b + Σ_i x_i w_i

Step 5: Update the weights and bias for i = 1 to n:

    w_i(new) = w_i(old) + α(t − y_in) x_i

    b(new) = b(old) + α(t − y_in)

Step 6: If the highest weight change that occurred during training is smaller than a specified tolerance, then stop the training process, else continue. This is the test for the stopping condition of the network.

Testing Algorithm:

Step 0: Initialize the weights.
Step 1: Perform steps 2-4 for each bipolar input vector x.
Step 2: Set the activations of the input units to x.
Step 3: Calculate the net input to the output unit:

    y_in = b + Σ_i x_i w_i

Step 4: Apply the activation function over the net input calculated:

    y = 1 if y_in >= 0
    y = -1 if y_in < 0

Object 6: Write an algorithm for the Madaline N/W with flowchart.

Madaline Network

The Madaline network training algorithm is as follows:

Step 0: Initialize the weights. Also set the initial learning rate α.
Step 1: While the stopping condition is false, perform steps 2-3.
Step 2: For each bipolar training pair s:t, perform steps 3-7.
Step 3: Activate the input layer units, for i = 1 to n:

    x_i = s_i

Step 4: Calculate the net input to each hidden Adaline unit:

    z_inj = b_j + Σ_{i=1..n} x_i w_ij,  j = 1 to m

Step 5: Calculate the output of each hidden unit:

    z_j = f(z_inj)

Step 6: Find the output of the net:

    y_in = b_0 + Σ_{j=1..m} z_j v_j

    y = f(y_in)

Step 7: Calculate the error and update the weights:

1. If t = y, no weight updation is required.
2. If t ≠ y and t = +1, update the weights on z_j, where the net input is closest to 0 (zero):

    b_j(new) = b_j(old) + α(1 − z_inj)

    w_ij(new) = w_ij(old) + α(1 − z_inj) x_i

3. If t ≠ y and t = −1, update the weights on z_k whose net input is positive:

    w_ik(new) = w_ik(old) + α(−1 − z_ink) x_i

    b_k(new) = b_k(old) + α(−1 − z_ink)

Step 8: Test for the stopping condition.

Object 7: Write a program to implement the Error Back-Propagation Algorithm

Error Back-Propagation Algorithm

Start with randomly chosen weights. While the MSE is unsatisfactory and computational bounds are not exceeded, do:
  For each input pattern x and desired output vector d:
    Compute the hidden node outputs x_j(1);
    Compute the network output vector o;
    Compute the error between o and the desired output vector d;
    Modify the weights between the hidden and output nodes:

        Δw_kj(2,1) = η (d_k − o_k) o_k (1 − o_k) x_j(1)

    Modify the weights between the input and hidden nodes:

        Δw_ji(1,0) = η Σ_k [(d_k − o_k) o_k (1 − o_k) w_kj(2,1)] x_j(1) (1 − x_j(1)) x_i(0)

  End for
End while.

Program for Backpropagation Algorithm

Program Code:

#include <iostream>
#include <cmath>
#include <cstdlib>
using namespace std;

const int IN = 2, HID = 2, OUT = 1;
const float ETA = 0.5;            // learning rate

float w1[HID][IN + 1];            // input-to-hidden weights (last column: bias)
float w2[OUT][HID + 1];           // hidden-to-output weights (last column: bias)

float sigmoid(float s) { return 1.0f / (1.0f + exp(-s)); }

void forward(const float x[IN], float h[HID], float o[OUT])
{
    for (int j = 0; j < HID; j++) {
        float s = w1[j][IN];                      // bias
        for (int i = 0; i < IN; i++) s += w1[j][i] * x[i];
        h[j] = sigmoid(s);
    }
    for (int k = 0; k < OUT; k++) {
        float s = w2[k][HID];                     // bias
        for (int j = 0; j < HID; j++) s += w2[k][j] * h[j];
        o[k] = sigmoid(s);
    }
}

int main()
{
    float x[4][IN] = {{0,0}, {0,1}, {1,0}, {1,1}};
    float d[4][OUT] = {{0}, {1}, {1}, {0}};       // XOR targets

    srand(1);                                     // random starting weights
    for (int j = 0; j < HID; j++)
        for (int i = 0; i <= IN; i++)
            w1[j][i] = (rand() / (float)RAND_MAX) - 0.5f;
    for (int k = 0; k < OUT; k++)
        for (int j = 0; j <= HID; j++)
            w2[k][j] = (rand() / (float)RAND_MAX) - 0.5f;

    for (int epoch = 0; epoch < 10000; epoch++) {
        for (int p = 0; p < 4; p++) {
            float h[HID], o[OUT], dk[OUT], dj[HID];
            forward(x[p], h, o);
            // delta for output nodes: (d - o) o (1 - o)
            for (int k = 0; k < OUT; k++)
                dk[k] = (d[p][k] - o[k]) * o[k] * (1 - o[k]);
            // delta for hidden nodes: [sum_k dk w2] h (1 - h)
            for (int j = 0; j < HID; j++) {
                float s = 0;
                for (int k = 0; k < OUT; k++) s += dk[k] * w2[k][j];
                dj[j] = s * h[j] * (1 - h[j]);
            }
            // modify weights between hidden and output nodes
            for (int k = 0; k < OUT; k++) {
                for (int j = 0; j < HID; j++) w2[k][j] += ETA * dk[k] * h[j];
                w2[k][HID] += ETA * dk[k];
            }
            // modify weights between input and hidden nodes
            for (int j = 0; j < HID; j++) {
                for (int i = 0; i < IN; i++) w1[j][i] += ETA * dj[j] * x[p][i];
                w1[j][IN] += ETA * dj[j];
            }
        }
    }

    for (int p = 0; p < 4; p++) {
        float h[HID], o[OUT];
        forward(x[p], h, o);
        cout << x[p][0] << " XOR " << x[p][1] << " = " << o[0] << "\n";
    }
    return 0;
}

Object 8: Study of Genetic Algorithm

Genetic Algorithm

Professor John Holland in 1975 proposed an attractive class of computational models, called Genetic Algorithms (GA), that mimic the biological evolution process for solving problems in a wide domain. The mechanisms under GA have been analyzed and explained later by Goldberg, De Jong, Davis, Muehlenbein, Chakraborti, Fogel, Vose and many others. Genetic Algorithms have three major applications, namely intelligent search, optimization and machine learning. Currently, Genetic Algorithms are used along with neural networks and fuzzy logic for solving more complex problems. Because of their joint usage in many problems, these together are often referred to by a generic name: soft computing. A Genetic Algorithm operates through a simple cycle of stages:
i) creation of a population of strings,
ii) evaluation of each string,
iii) selection of the best strings, and
iv) genetic manipulation to create a new population of strings.
The cycle of a Genetic Algorithm is presented below.
Each cycle in a Genetic Algorithm produces a new generation of possible solutions for a given problem. In the first phase, an initial population, describing representatives of the potential solution, is created to initiate the search process. The elements of the population are encoded into bit-strings, called chromosomes. The performance of the strings, often called fitness, is then evaluated with the help of some functions, representing the constraints of the problem. Depending on the fitness of the chromosomes, they are selected for a subsequent genetic manipulation process. It should be noted that the selection process is mainly responsible for assuring survival of the best-fit individuals. After selection of the population strings is over, the genetic manipulation process, consisting of two steps, is carried out. In the first step, the crossover operation, which recombines the bits (genes) of each two selected strings (chromosomes), is executed. Various types of crossover operators are found in the literature. The single-point and two-point crossover operations are illustrated below.

The crossover points of any two chromosomes are selected randomly. The second step in the genetic manipulation process is termed mutation, where the bits at one or more randomly selected positions of the chromosomes are altered. The mutation process helps to overcome trapping at local maxima. The offspring produced by the genetic manipulation process form the next population to be evaluated.

Fig.: Mutation of a chromosome at the 5th bit position.

Example: The Genetic Algorithm cycle is illustrated in this example for maximizing the function f(x) = x^2 in the interval 0 ≤ x ≤ 31. In this example the fitness function is f(x) itself. The larger the functional value, the better the fitness of the string. In this example, we start with 4 initial strings. The fitness values of the strings and the percentage fitness of the total are estimated in Table A. Since the fitness of the second string is largest, we select 2 copies of the second string and one each of the first and fourth strings in the mating pool. The selection of the partners in the mating pool is also done randomly. Here, in Table B, we selected the partner of string 1 to be the 2nd string and the partner of the 4th string to be the 2nd string. The crossover points for the first-second and second-fourth strings have been selected after the 0th and 2nd bit positions respectively in Table B. The second generation of the population, without mutation in the first generation, is presented in Table C.

Table A:

Table B:

Table C:

A schema (plural: schemata), also called a hyperplane or similarity template, is a genetic pattern with fixed values of 1 or 0 at some designated bit positions. For example, S = 01?1??1 is a 7-bit schema with fixed values at 4 bits and don't-care values, represented by ?, at the remaining 3 positions. Since 4 positions matter for this schema, we say that the schema contains 4 genes.
Deterministic Explanation of Holland's Observation

To explain Holland's observation in a deterministic manner, let us presume the following assumptions:
i) There are no recombinations or alterations to genes.
ii) Initially, a fraction f of the population possesses the schema S, and those individuals reproduce at a fixed rate r.
iii) All other individuals, lacking schema S, reproduce at a rate s < r.
Thus, with an initial population size of N, after t generations we find N·f·r^t individuals possessing schema S, and the population of the rest of the individuals is N·(1 − f)·s^t. Therefore, the fraction of the individuals with schema S is given by

    N·f·r^t / [N·f·r^t + N·(1 − f)·s^t]

For small t and f, the above fraction reduces to f·(r/s)^t, which means the population having the schema S increases exponentially at a rate (r/s). A stochastic proof of the above property will be presented shortly, vide a well-known theorem called the fundamental theorem of Genetic Algorithms.
Stochastic Explanation of Genetic Algorithms

For the presentation of the fundamental theorem of Genetic Algorithms, the following terminologies are defined in order.

Definition: The order of a schema H, denoted by O(H), is the number of fixed positions in the schema. For example, the order of the schema H = ?001?1? is 4, since it contains 4 fixed positions.

Definition: The defining length of a schema H, denoted by d(H), is the distance between the first and the last fixed positions in the schema. For example, with positions indexed from 0, the schema ?1?001 has a defining length d(H) = 5 − 1 = 4, while the d(H) of ???1?? is zero.

Definition: The schemata defined over L-bit strings may be geometrically interpreted as hyperplanes in an L-dimensional hyperspace (a binary vector space), with each L-bit string representing one corner point in the L-dimensional cube.

Object 9: Study of Matlab neural network toolbox.

Matlab Neural Network Toolbox

The Matlab neural network toolbox provides a complete set of functions and a graphical user interface for the design, implementation, visualization, and simulation of neural networks. It supports the most commonly used supervised and unsupervised network architectures and a comprehensive set of training and learning functions.

KEY FEATURES

Graphical user interface (GUI) for creating, training, and simulating your neural networks.
Support for the most commonly used supervised and unsupervised network architectures.
A comprehensive set of training and learning functions.
A suite of Simulink blocks, as well as documentation and demonstrations of control system applications.
Automatic generation of Simulink models from neural network objects.
Routines for improving generalization.

GENERAL CREATION OF NETWORK

net = network
net = network(numInputs, numLayers, biasConnect, inputConnect, layerConnect, outputConnect, targetConnect)

Description

NETWORK creates new custom networks. It is used to create networks that are then customized by functions such as NEWP, NEWLIN, NEWFF, etc.
NETWORK takes these optional arguments (shown with default values):
numInputs - Number of inputs, 0.
numLayers - Number of layers, 0.
biasConnect - numLayers-by-1 Boolean vector, zeros.
inputConnect - numLayers-by-numInputs Boolean matrix, zeros.
layerConnect - numLayers-by-numLayers Boolean matrix, zeros.
outputConnect - 1-by-numLayers Boolean vector, zeros.
targetConnect - 1-by-numLayers Boolean vector, zeros.
It returns:
NET - New network with the given property values.

TRAIN AND ADAPT

1. Incremental training: updating the weights after the presentation of each single training sample.
2. Batch training: updating the weights after each presentation of the complete data set.

When using adapt, both incremental and batch training can be used. When using train, on the other hand, only batch training will be used, regardless of the format of the data. The big plus of train is that it gives you a lot more choice in training functions (gradient descent, gradient descent with momentum, Levenberg-Marquardt, etc.) which are implemented very efficiently.

The difference between train and adapt is also the difference between passes and epochs. When using adapt, the property that determines how many times the complete training data set is used for training the network is called net.adaptParam.passes. Fair enough. But when using train, the exact same property is called net.trainParam.epochs.

>> net.trainFcn = 'traingdm'
>> net.trainParam.epochs = 1000
>> net.adaptFcn = 'adaptwb'
>> net.adaptParam.passes = 10

TRAINING FUNCTIONS

There are several types of training-related functions:

1. Supported training functions,
2. Supported learning functions,
3. Transfer functions,
4. Transfer derivative functions,
5. Weight and bias initialization functions,
6. Weight derivative functions.

SUPPORTED TRAINING FUNCTIONS

trainb - Batch training with weight and bias learning rules
trainbfg - BFGS quasi-Newton backpropagation
trainbr - Bayesian regularization
trainc - Cyclical order incremental update
traincgb - Powell-Beale conjugate gradient backpropagation
traincgf - Fletcher-Powell conjugate gradient backpropagation
traincgp - Polak-Ribiere conjugate gradient backpropagation
traingd - Gradient descent backpropagation
traingda - Gradient descent with adaptive learning rate backpropagation
traingdm - Gradient descent with momentum backpropagation
traingdx - Gradient descent with momentum & adaptive learning rate backpropagation
trainlm - Levenberg-Marquardt backpropagation
trainoss - One-step secant backpropagation
trainr - Random order incremental update
trainrp - Resilient backpropagation (Rprop)
trains - Sequential order incremental update
trainscg - Scaled conjugate gradient backpropagation

SUPPORTED LEARNING FUNCTIONS

learncon - Conscience bias learning function
learngd - Gradient descent weight/bias learning function
learngdm - Gradient descent with momentum weight/bias learning function
learnh - Hebb weight learning function
learnhd - Hebb with decay weight learning rule
learnis - Instar weight learning function
learnk - Kohonen weight learning function
learnlv1 - LVQ1 weight learning function
learnlv2 - LVQ2 weight learning function
learnos - Outstar weight learning function
learnp - Perceptron weight and bias learning function
learnpn - Normalized perceptron weight and bias learning function
learnsom - Self-organizing map weight learning function
learnwh - Widrow-Hoff weight and bias learning rule

TRANSFER FUNCTIONS

compet  Competitive transfer function.
hardlim  Hard limit transfer function.
hardlims  Symmetric hard limit transfer function.
logsig  Log sigmoid transfer function.
poslin  Positive linear transfer function.
purelin  Linear transfer function.
radbas  Radial basis transfer function.
satlin  Saturating linear transfer function.
satlins  Symmetric saturating linear transfer function.
softmax  Softmax transfer function.
tansig  Hyperbolic tangent sigmoid transfer function.
tribas  Triangular basis transfer function.
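The formulas behind a few of these transfer functions can be sketched in Python (a re-implementation for illustration, not the toolbox code; note that MATLAB's tansig(n) = 2/(1+exp(-2n))-1 is mathematically equal to tanh(n)):

```python
import math

def logsig(n):
    # log sigmoid: squashes any input into (0, 1)
    return 1 / (1 + math.exp(-n))

def tansig(n):
    # hyperbolic tangent sigmoid: squashes any input into (-1, 1)
    return math.tanh(n)

def hardlim(n):
    # hard limit: 1 if n >= 0, else 0
    return 1.0 if n >= 0 else 0.0

print(logsig(0), tansig(0), hardlim(0))   # -> 0.5 0.0 1.0
```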

TRANSFER DERIVATIVE FUNCTIONS

dhardlim  Hard limit transfer derivative function.
dhardlms  Symmetric hard limit transfer derivative function.
dlogsig  Log sigmoid transfer derivative function.
dposlin  Positive linear transfer derivative function.
dpurelin  Linear transfer derivative function.
dradbas  Radial basis transfer derivative function.
dsatlin  Saturating linear transfer derivative function.
dsatlins  Symmetric saturating linear transfer derivative function.
dtansig  Hyperbolic tangent sigmoid transfer derivative function.
dtribas  Triangular basis transfer derivative function.

WEIGHT AND BIAS INITIALIZATION FUNCTIONS

initcon  Conscience bias initialization function.
initzero  Zero weight/bias initialization function.
midpoint  Midpoint weight initialization function.
randnc  Normalized column weight initialization function.
randnr  Normalized row weight initialization function.
rands  Symmetric random weight/bias initialization function.

WEIGHT DERIVATIVE FUNCTIONS

ddotprod  Dot product weight derivative function.

NEURAL NETWORK TOOLBOX GUI

1. The graphical user interface (GUI) is designed to be simple and user friendly. This tool lets you import potentially large and complex data sets.
2. The GUI also enables you to create, initialize, train, simulate, and manage networks. It provides the GUI Network/Data Manager window.
3. The window has its own work area, separate from the more familiar command-line workspace. Thus, when using the GUI, you can "export" the GUI results to the command-line workspace, and similarly "import" results from the command-line workspace into the GUI.
4. Once the Network/Data Manager is up and running, you can create a network, view it, train it, simulate it, and export the final results to the workspace. Similarly, you can import data from the workspace for use in the GUI.

A graphical user interface can thus be used to
1. Create networks,
2. Create data,
3. Train the networks,
4. Export the networks,
5. Export the data to the command-line workspace.

CONCLUSION

The presentation has given an overview of the Neural Network Toolbox in MATLAB.

Object: 10 Study of MATLAB Fuzzy Logic Toolbox.

MATLAB Fuzzy Logic Toolbox

Fuzzy logic in MATLAB can be handled very easily thanks to the Fuzzy Logic Toolbox, which provides a complete set of functions to design and implement various fuzzy logic processes. The major fuzzy logic operations include:
fuzzification,
defuzzification,
fuzzy inference.
All of these are performed by means of various functions, and they can even be carried out through the graphical user interface.
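Defuzzification turns a fuzzy output set back into a single crisp number. As a small language-independent illustration of the centroid method (one of several methods the toolbox's defuzz function supports), here is a Python sketch; the sampled universe x and membership values mu below are made-up example data:

```python
# Centroid defuzzification over a sampled universe:
# crisp output = sum(x_i * mu_i) / sum(mu_i)
def defuzz_centroid(x, mu):
    num = sum(xi * mi for xi, mi in zip(x, mu))
    den = sum(mu)
    return num / den

x  = [0, 1, 2, 3, 4]
mu = [0.0, 0.5, 1.0, 0.5, 0.0]   # a symmetric triangle centred at 2
print(defuzz_centroid(x, mu))    # -> 2.0
```

For a symmetric membership function the centroid coincides with the peak, which makes this a convenient sanity check.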

The features are:
It provides tools to create and edit Fuzzy Inference Systems (FIS). It allows integrating fuzzy systems into simulations with SIMULINK. It is also possible to create standalone C programs that call on fuzzy systems built with MATLAB.

The Toolbox provides three categories of tools:
Command-line functions,
Graphical or interactive tools,
Simulink blocks.

COMMAND LINE FIS FUNCTIONS

addmf  Add membership function to FIS.
addrule  Add rule to FIS.
addvar  Add variable to FIS.
defuzz  Defuzzify membership function.
evalfis  Perform fuzzy inference calculation.
evalmf  Generic membership function evaluation.
gensurf  Generate FIS output surface.
getfis  Get fuzzy system properties.
mfstrtch  Stretch membership function.
newfis  Create new FIS.
plotfis  Display FIS input-output diagram.
plotmf  Display all membership functions for one variable.
readfis  Load FIS from disk.
rmmf  Remove membership function from FIS.
rmvar  Remove variable from FIS.
setfis  Set fuzzy system properties.
showfis  Display annotated FIS.
showrule  Display FIS rules.
writefis  Save FIS to disk.

MEMBERSHIP FUNCTIONS

dsigmf  Difference of two sigmoid membership functions.
gauss2mf  Two-sided Gaussian curve membership function.
gaussmf  Gaussian curve membership function.
gbellmf  Generalized bell curve membership function.
pimf  Pi-shaped curve membership function.
psigmf  Product of two sigmoid membership functions.
smf  S-shaped curve membership function.
sigmf  Sigmoid curve membership function.
trapmf  Trapezoidal membership function.
trimf  Triangular membership function.
zmf  Z-shaped curve membership function.
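The closed forms behind two of these membership functions can be sketched in Python (a re-implementation for illustration, not the toolbox code): trimf takes parameters [a b c] for the triangle's feet and peak, and gaussmf takes [sigma c] for the width and centre:

```python
import math

def trimf(x, a, b, c):
    # Triangular membership: 0 outside [a, c], rising to 1 at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def gaussmf(x, sigma, c):
    # Gaussian membership: exp(-(x - c)^2 / (2 sigma^2)), peak 1 at c.
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

print(trimf(5, 0, 5, 10))     # at the peak -> 1.0
print(trimf(2.5, 0, 5, 10))   # halfway up the left slope -> 0.5
print(gaussmf(3, 1, 3))       # at the centre -> 1.0
```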

GRAPHICAL USER INTERFACE EDITORS (GUI TOOLS)

anfisedit  ANFIS training and testing UI tool.
findcluster  Clustering UI tool.
fuzzy  Basic FIS editor.
mfedit  Membership function editor.
ruleedit  Rule editor and parser.
ruleview  Rule viewer and fuzzy inference diagram.
surfview  Output surface viewer.

(Screenshots: FIS Editor (Mamdani), FIS Editor (Sugeno), FIS Membership Function Editor, FIS Rule Editor, FIS Rule Viewer, FIS Surface Viewer.)

SIMULINK BLOCKS

Once a fuzzy system is created using the GUI tools or some other method, it can be directly embedded into SIMULINK using the Fuzzy Logic Controller block.

MEMBERSHIP SIMULINK BLOCKS

This Toolbox includes 11 built-in membership function types, built from several basic functions: piecewise linear functions (triangular and trapezoidal), the Gaussian distribution function (Gaussian curves and generalized bell), the sigmoid curve, and quadratic and cubic polynomial curves (Z, S, and Pi curves).

ADVANCED TECHNIQUES

anfis  Training routine for Sugeno-type FIS (MEX only).
fcm  Find clusters with fuzzy c-means clustering.
genfis1  Generate FIS matrix using grid partition.
genfis2  Generate FIS matrix using subtractive clustering.
subclust  Estimate cluster centers with subtractive clustering.

CONCLUSION

The presentation has given an overview of the Fuzzy Logic Toolbox in MATLAB.
Object: 11 Write a MATLAB program to implement Fuzzy Set operations & properties.

Program for fuzzy sets with properties and operations

clear all
clc
disp('Fuzzy set with properties and operations')
a = [0 1 0.5 0.4 0.6]
b = [0 0.5 0.7 0.8 0.4]
c = [0.3 0.9 0.2 0 1]
phi = [0 0 0 0 0]
disp('Union of a and b')
au = max(a,b)
disp('Intersection of a and b')
iab = min(a,b)
disp('Union of b and a')
bu = max(b,a)
if (au == bu)
disp('Commutative law is satisfied')
else
disp('Commutative law is not satisfied')
end
disp('Union of b and c')
cu = max(b,c)
disp('a U (b U c)')
acu = max(a,cu)
disp('(a U b) U c')
auc = max(au,c)
if (acu == auc)
disp('Associative law is satisfied')
else
disp('Associative law is not satisfied')
end
disp('Intersection of b and c')
ibc = min(b,c)
disp('a U (b I c)')
dls = max(a,ibc)
disp('Union of a and c')
uac = max(a,c)
disp('(a U b) I (a U c)')
drs = min(au,uac)
if (dls == drs)
disp('Distributive law is satisfied')
else
disp('Distributive law is not satisfied')
end
disp('a U a')
idl = max(a,a)
if (idl == a)
disp('Idempotency law is satisfied')
else
disp('Idempotency law is not satisfied')
end
disp('a I phi')
idtl = min(a,phi)
if (idtl == phi)
disp('Identity law is satisfied')
else
disp('Identity law is not satisfied')
end
disp('Complement of (a I b)')
for i = 1:5
ciab(i) = 1 - iab(i)
end
ciab
disp('Complement of a')
for i = 1:5
ca(i) = 1 - a(i)
end
ca
disp('Complement of b')
for i = 1:5
cb(i) = 1 - b(i)
end
cb
disp('Complement of the complement of a')
for i = 1:5
cca(i) = 1 - ca(i)
end
cca
if (a == cca)
disp('Involution law is satisfied')
else
disp('Involution law is not satisfied')
end
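Outside MATLAB, the same element-wise max/min/complement operations and law checks can be mirrored in a short Python sketch (the set values a and b are taken from the program above):

```python
a = [0, 1, 0.5, 0.4, 0.6]
b = [0, 0.5, 0.7, 0.8, 0.4]

union        = [max(x, y) for x, y in zip(a, b)]   # a U b
intersection = [min(x, y) for x, y in zip(a, b)]   # a I b
comp_a       = [1 - x for x in a]                  # a'
comp_b       = [1 - y for y in b]                  # b'

# Commutative law: a U b == b U a
assert union == [max(y, x) for x, y in zip(a, b)]

# De Morgan's law: (a I b)' == a' U b'
lhs = [1 - m for m in intersection]
rhs = [max(x, y) for x, y in zip(comp_a, comp_b)]
assert lhs == rhs

print(union)          # [0, 1, 0.7, 0.8, 0.6]
print(intersection)   # [0, 0.5, 0.5, 0.4, 0.4]
```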

Object: 12. Write a program to implement composition on Fuzzy and Crisp relations.

Program for composition on Fuzzy and Crisp relations

clear all
clc
disp('Composition on crisp relation')
a = [0.2 0.6]
b = [0.3 0.5]
c = [0.6 0.7]
for i = 1:2
r(i) = a(i)*b(i)
s(i) = b(i)*c(i)
end
r
s
irs = min(r,s)
disp('Crisp composition of r and s using max-min composition')
crs = max(irs)
for i = 1:2
prs(i) = r(i)*s(i)
end
prs
disp('Crisp composition of r and s using max-product composition')
mprs = max(prs)
disp('Fuzzy composition')
firs = min(r,s)
disp('Fuzzy composition of r and s using max-min composition')
frs = max(firs)
for i = 1:2
fprs(i) = r(i)*s(i)
end
fprs
disp('Fuzzy composition of r and s using max-product composition')
fmprs = max(fprs)
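For relations given as matrices rather than the 1-D vectors used above, max-min composition is T(x,z) = max over y of min(R(x,y), S(y,z)). A Python sketch with made-up example relations:

```python
# Max-min composition of fuzzy relations R (on X x Y) and S (on Y x Z).
def maxmin(R, S):
    return [[max(min(R[i][k], S[k][j]) for k in range(len(S)))
             for j in range(len(S[0]))]
            for i in range(len(R))]

R = [[0.6, 0.3],
     [0.2, 0.9]]
S = [[1.0, 0.5],
     [0.8, 0.4]]
print(maxmin(R, S))   # -> [[0.6, 0.5], [0.8, 0.4]]
```

Replacing min with multiplication in the inner expression gives the max-product composition used in the second half of the program.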

Object: 13. Consider the following fuzzy sets

A =
B =

Program to find union, intersection and complement of fuzzy sets
% Enter the two fuzzy sets

u = input('Enter the first fuzzy set A ')
v = input('Enter the second fuzzy set B ')
disp('Union of A and B')
w = max(u,v)
disp('Intersection of A and B')
p = min(u,v)
[m] = size(u)
disp('Complement of A')
q1 = ones(m) - u
[n] = size(v)
disp('Complement of B')
q2 = ones(n) - v

Output
Enter the first fuzzy set A [1 0.4 0.6 0.3]
Enter the second fuzzy set B [0.3 0.2 0.6 0.5]
Union of A and B
w =
    1.0000    0.4000    0.6000    0.5000
Intersection of A and B
p =
    0.3000    0.2000    0.6000    0.3000
Complement of A
q1 =
         0    0.6000    0.4000    0.7000
Complement of B
q2 =
    0.7000    0.8000    0.4000    0.5000

Object: 14. Write a MATLAB program for maximizing f(x) = x^2 using GA, where x ranges from 0 to 31. Perform 5 iterations only.

Program for genetic algorithm to maximize the function f(x) = x^2

clear all
clc
% x ranges from 0 to 31 and 2^5 = 32, so
% five bits are enough to represent x in binary
n = input('Enter no. of population in each iteration ')
nit = input('Enter no. of iterations ')
% Generate the initial population
[oldchrom] = initbp(n,5)
% The population in binary is converted to integer
FieldD = [5 0 31 0 0 1 1]
for i = 1:nit
phen = bindecod(oldchrom,FieldD,3) % phen gives the integer value of the population
% Obtain fitness values
sqx = phen.^2
sumsqx = sum(sqx)
avsqx = sumsqx/n
hsqx = max(sqx)
pselect = sqx./sumsqx
sumpselect = sum(pselect)
avpselect = sumpselect/n
hpselect = max(pselect)
% Apply roulette wheel selection
FitnV = sqx
Nsel = 4
newchrix = selrws(FitnV,Nsel)
newchrom = oldchrom(newchrix,:)
% Perform crossover
crossrate = 1
newchromc = recsp(newchrom,crossrate) % new population after crossover
% Perform mutation
vlub = 0:31
mutrate = 0.001
newchromm = mutrandbin(newchromc,vlub,mutrate) % new population after mutation
disp('For iteration')
i
disp('Population')
oldchrom
disp('X')
phen
disp('f(X)')
sqx
oldchrom = newchromm
end
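The routines used above (initbp, bindecod, selrws, recsp, mutrandbin) come from an add-on GA toolbox. As a library-free illustration of the same loop, here is a Python sketch with 5-bit chromosomes, roulette wheel selection, single-point crossover, and bit-flip mutation; the population size of 4, the random seed, and all names are my own choices:

```python
import random

def fitness(chrom):
    # decode the 5-bit chromosome into an integer x in 0..31, return f(x) = x^2
    x = int("".join(map(str, chrom)), 2)
    return x * x

def roulette(pop, fits):
    # fitness-proportionate (roulette wheel) selection
    total = sum(fits)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return chrom
    return pop[-1]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(4)]
for it in range(5):                    # five iterations, as the experiment asks
    fits = [fitness(c) for c in pop]
    new = []
    while len(new) < len(pop):
        p1, p2 = roulette(pop, fits), roulette(pop, fits)
        cut = random.randint(1, 4)     # single-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.001:    # mutation with rate 0.001
            j = random.randrange(5)
            child[j] ^= 1
        new.append(child)
    pop = new
best = max(fitness(c) for c in pop)
print(best)   # best f(x) in the final population; the maximum possible is 31^2 = 961
```

Because selection favours high-fitness chromosomes, the population drifts toward x = 31 (binary 11111) over the generations.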
