
Neural Networks: Models

A neuron is the basic building block of every neural network model. Why are there many neural network models? Because neural networks have several associated characteristics, and different models result from selecting different values for these characteristics:

a) Topology of the network architecture
b) Directions of output
c) Form of learning
d) Types of input values
e) Forms of activation functions

a) Topology of the network architecture

1) Multilayered: There are distinct layers: input, hidden, and output. The neurons within each layer are connected with the neurons of the adjacent layers through directed edges. There are no connections among the neurons within the same layer. Example: backpropagation model.

2) Non-multilayered: We can also build a neural network without such distinct layers as input, output, or hidden. Every neuron can be connected with every other neuron in the network through directed edges, and every neuron may receive input as well as produce output. Example: Hopfield model.

b) Directions of output

1) Non-recurrent (feedforward only): In the backpropagation model, the outputs always propagate from left to right in the diagrams. This type of output propagation is called feedforward. Neural network models with feedforward propagation only are called non-recurrent. Example: backpropagation model.

Note: backpropagation should not be confused with feedbackward propagation. Backpropagation refers to backward adjustments of the weights, not to output movements from the neurons.

2) Recurrent (both feedforward and feedbackward): In some other neural network models, outputs can also propagate backward, i.e., from right to left in the diagrams. This is called feedbackward. A neural network in which the outputs can propagate in both directions, forward and backward, is called a recurrent model. Biological systems have such recurrent structures, and a feedback system can be represented by an equivalent feedforward system. Example: Hopfield model.

c) Form of learning

1) Supervised learning: The neural network learns under the supervision of a teacher. For each input, the teacher knows what the correct output should be, and this information is given to the neural network. Example: backpropagation model.

2) Unsupervised learning: In some models, neural networks can learn by themselves after being given some form of general guidelines. There is no external comparison between actual and ideal output; instead, the neural network adjusts itself internally using certain criteria or algorithms. Examples: Kohonen model, Boltzmann machine.

d) Types of input values

We can assume different types of input values. The most common types are:

- Binary: an input value is restricted to either 0 or 1.
- Bipolar: an input value is either -1 or 1.
- Continuous: an input value is a continuous real number in a certain range.
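As a minimal illustration of these three types (the array values below are made up for this sketch):

```python
import numpy as np

binary = np.array([1, 0, 1, 1, 0])          # binary: values in {0, 1}
bipolar = 2 * binary - 1                     # maps {0, 1} -> {-1, +1}
continuous = np.array([0.25, -0.8, 0.61])    # continuous: reals in a chosen range

print(bipolar)   # [ 1 -1  1  1 -1]
```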

e) Forms of activation functions

The most common forms of activation functions are:

- Linear
- Step
- Sigmoid

Summary of the characteristics of the four models discussed:

Backpropagation: multilayered, non-recurrent, supervised
Hopfield: non-multilayered, recurrent, supervised
Kohonen: multilayered, non-recurrent, unsupervised
Boltzmann machine: non-multilayered, recurrent, supervised/unsupervised
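A minimal sketch of the three activation forms (the parameter names and default values are illustrative assumptions, not from the original notes):

```python
import numpy as np

def linear(x, a=1.0):
    # Linear: output is proportional to the net input.
    return a * x

def step(x, threshold=0.0):
    # Step: 1 above the threshold, 0 otherwise (a bipolar variant
    # would return -1 in place of 0).
    return np.where(x >= threshold, 1, 0)

def sigmoid(x):
    # Sigmoid: smooth squashing of any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))
```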

Organization of neural network models

Neural network models can be organized in two ways: based on their functional characteristics (such as associative memory), or based on specific models, often named after the original developer (such as the Hopfield and Kohonen models).

1) Associative memory (content-addressable memory)

Computer memory is non-associative: an exact memory address must be specified, and only the information stored at that address is retrieved. Associative memory is a type of neural network that can map (associate) inputs, which are contents rather than addresses, to information stored in memory:

1. given an input (which may be partial, noisy, or may contain an error),
2. the network can retrieve the complete stored memory that is the closest match to the input.

For example, given a noisy or incomplete image of a letter, the network can retrieve the complete stored letter pattern.
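As a minimal sketch of content-addressable retrieval, the following performs a direct nearest-match lookup over hypothetical bipolar patterns. Note that this is plain nearest-neighbor search, not a neural network; the Hopfield network below achieves the same kind of retrieval through its own dynamics.

```python
import numpy as np

# Two made-up 6-component bipolar patterns stored in "memory".
stored = np.array([
    [ 1,  1, -1, -1,  1, -1],
    [-1,  1,  1, -1, -1,  1],
])

def retrieve(query):
    # Distance = number of mismatched components; return the
    # stored pattern closest to the (possibly noisy) query.
    distances = np.sum(stored != query, axis=1)
    return stored[np.argmin(distances)]

noisy = np.array([1, 1, -1, 1, 1, -1])   # pattern 0 with one component flipped
print(retrieve(noisy))                    # -> [ 1  1 -1 -1  1 -1]
```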

2) Hopfield networks

The Hopfield network model is probably the second most popular type of neural network, after the backpropagation model. Hopfield networks can be used as associative memory, and they can also be applied to optimization problems. The associative-memory version is classified as supervised learning by some authors and as unsupervised by others.

The basic idea of the Hopfield network as associative memory is that it can store a set of exemplar patterns as multiple stable states. Given a new input pattern, which may be partial or noisy, the network can converge to the exemplar pattern that is nearest to the input pattern.

* Architecture

A Hopfield network consists of a single layer of neurons, 1, 2, ..., n. The network is fully interconnected: every neuron is connected to every other neuron. The network is recurrent, with both feedforward and feedbackward capabilities, which means input to the neurons comes from external input as well as internally from the neurons themselves. Each input/output value, xi or yj, takes a discrete bipolar value of either 1 or -1. The number of neurons, n, is the size required for each pattern in the bipolar representation.

For example, suppose that each pattern is a letter represented by a 10 × 12 two-dimensional array, where each array element is either 1 for a black square or -1 for a blank square. Then n will be 10 × 12 = 120.

Each edge is associated with a weight, wij, which satisfies the following conditions:

wij = wji for all i, j = 1, ..., n
wii = 0 for all i = 1, ..., n

* Computational procedures

Determining wij: Suppose that m exemplar patterns are presented (s = 1, ..., m). Each pattern s has n components, x1(s), x2(s), ..., xn(s), where each xk(s) = 1 or -1. Determine wij for i, j = 1 to n by:

wij = Σs=1..m xi(s) xj(s) for i ≠ j, and wii = 0

Once determined, the values of the wij's are not changed; in the backpropagation model, by contrast, the wij's change as the learning process proceeds.

The synapse between neurons i and j becomes excitatory if (xi(s) = 1 and xj(s) = 1) or (xi(s) = -1 and xj(s) = -1); then xi(s) xj(s) = 1 and a positive contribution to wij results. The synapse between neurons i and j becomes inhibitory if (xi(s) = 1 and xj(s) = -1) or (xi(s) = -1 and xj(s) = 1); then xi(s) xj(s) = -1 and a negative contribution to wij results.
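A minimal sketch of this weight determination (the function name is an assumption for illustration):

```python
import numpy as np

def hopfield_weights(patterns):
    # patterns: shape (m, n), entries in {-1, +1}, one exemplar per row.
    # w_ij = sum over s of x_i(s) * x_j(s); the diagonal is forced to 0.
    patterns = np.asarray(patterns)
    W = patterns.T @ patterns    # Hebbian outer-product sum; symmetric, so wij = wji
    np.fill_diagonal(W, 0)       # enforce wii = 0
    return W
```

For the letter example above, each 10 × 12 letter array would first be flattened into a vector of n = 120 bipolar values before being passed in.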

Energy function:

E = -(1/2) Σi Σj wij xi xj

E always decreases when some xi(t) changes its value, and E stays the same when there is no change in the xi(t)'s.

Basic processing steps:

Step 1. Store the exemplar patterns, as preprocessing, by determining the wij's.
Step 2. Find the closest matching exemplar pattern to a given input representing an unknown pattern. At t = 0, apply the given input xi(0), i = 1, ..., n; then repeatedly select neurons and update their values until the network converges.
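A one-line sketch of the energy function for numpy arrays (assuming the W produced by the weight sketch above):

```python
import numpy as np

def energy(W, x):
    # E = -1/2 * sum_i sum_j w_ij * x_i * x_j.
    # Asynchronous updates never increase E, so a fully flat epoch
    # (no state change, hence no energy change) signals convergence.
    return -0.5 * float(x @ W @ x)
```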

Implementation considerations for Step 2. There are two conditions:

1. Selection of neurons. Each neuron is picked uniformly at random, independently of the other neurons, and at the same average rate. For each selected neuron i, the new value replaces xi immediately and is used in the computations of the other neurons (and of xi itself, if it is picked again).
2. Convergence. If and only if a neuron changes its state does the energy decrease. A solution has been converged upon when all the neurons are updated without any change.

Methods for selecting neurons, evaluated against these conditions:

1. Select i = 1, 2, ..., n, in this order (this is an epoch). Test for convergence; if not converged, go to the next epoch. This method violates Condition 1 (the selection is not random).

2. Select i from 1 to n, uniformly at random and independently of the other neurons, n times (this is an epoch). Test for convergence; if not converged, go to the next epoch. This method violates Condition 2 (some neurons may not be updated within an epoch).

3. Select a unique i every time from 1 to n, in random order, n times; that is, every number between 1 and n is picked once and only once (this is an epoch). Test for convergence; if not converged, go to the next epoch. This method can be implemented by randomly permuting the numbers 1 to n and then picking out one number at a time in order. This method violates Condition 1 (neurons are not picked independently of one another, because the probability of a neuron being picked rises the longer it has gone unpicked within the epoch). A sketch of this method is given below.

4. Select i from 1 to n, uniformly at random and independently of the other neurons, until every neuron has been updated at least once (this is an epoch). Test for convergence; if not converged, go to the next epoch.
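A minimal sketch of recall using method 3. The update rule assumed here is the standard asynchronous Hopfield rule, xi ← sign(Σj wij xj), keeping the previous state when the net input is exactly zero; the function name and the max_epochs safety cap are assumptions for illustration.

```python
import numpy as np

def hopfield_recall(W, x0, max_epochs=100, rng=None):
    # Method 3: each epoch visits every neuron exactly once,
    # in a fresh random order (a random permutation of 1..n).
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, copy=True)
    n = len(x)
    for _ in range(max_epochs):
        changed = False
        for i in rng.permutation(n):
            h = W[i] @ x                                   # net input to neuron i
            new_xi = x[i] if h == 0 else (1 if h > 0 else -1)
            if new_xi != x[i]:
                x[i] = new_xi                              # used immediately by later updates
                changed = True
        if not changed:                                    # full epoch with no change: converged
            break
    return x
```

Combined with the earlier sketches, recall of a noisy pattern would look like W = hopfield_weights(patterns) followed by hopfield_recall(W, noisy_input).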
