
Neural networks

A relatively new type of information processing which has so far seen only limited use in chemistry.

Originally developed as a way to mimic the human brain, this research area now includes a range of parallel distributed processing methods.
Neural networks

Potential advantages of the approach:
- Ability to learn new things with an existing net.
- Can automatically adjust to new stimuli.
- Exact rules are not required.
- Adding poor data degrades the net only slowly.

Neural networks

Current disadvantages:
- Not many applications in chemistry to date.
- Only useful for classification, so far.
- Typically slow to develop and train.

Neural networks
We will only introduce some general concepts in this unit:
- Network components
- Network topology
- General training approaches
- Some example network types

Hopefully, this will give you at least a basic understanding of the area.

Network nodes
A network consists of a series of nodes.
Node example

[Node diagram: each node receives outputs o_m from the previous layer, applies weights w_nm and a bias via its site functions, passes the summed response f(Σᵢ wᵢ oᵢ) through an activation function, and sends the result on through the output function Oₙ.]

Network nodes
Depending on the network and node type, not all of these functions may be used.
- o_m: output from a previous unit
- w: weight applied to o_m
- bias: additional weight applied to select inputs or layers
The weighted responses are then fed to an activation function.
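As a rough sketch of what a single node computes (the function and variable names here are purely illustrative, not from any particular package):

```python
import numpy as np

def node_output(inputs, weights, bias, activation):
    """One node: weight each input from the previous layer, add the bias,
    and pass the summed response through the activation function."""
    return activation(np.dot(weights, inputs) + bias)
```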

Logistic activation function


This is one of the most common activation function types.

$$a_j(t+1) = \frac{1}{1 + e^{-\left(\sum_i w_{ij}\, o_i(t) - \theta_j\right)}}$$

where w_ij are the weights on the inputs from units i, o_i(t) are the outputs of those units, and θ_j is a threshold term for unit j.

The function output approximates a simple on/off condition, saturating near 0 and 1.

Activation functions
Other types of functions have been used, but for our limited coverage one example is enough. The object of the function is to sum the responses from the previous units to which the node is linked and produce an output response, usually on (1) or off (0). This output is then sent on to other nodes.
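A minimal sketch of the logistic function above (variable names are illustrative; theta plays the role of the threshold θ_j):

```python
import numpy as np

def logistic_activation(weights, inputs, theta):
    """a_j(t+1) = 1 / (1 + exp(-(sum_i w_ij * o_i(t) - theta_j)))"""
    net = np.dot(weights, inputs) - theta
    return 1.0 / (1.0 + np.exp(-net))

# A strongly positive net input gives an output near 1 (on);
# a strongly negative one gives an output near 0 (off).
```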

Types of nodes
- Input: information obtained from an external source.
- Output: information sent to an external source.
- Dual: combination input and output node.
- Hidden: node in an internal layer; no external interaction.

A simple network

[Diagram: external inputs feed the input layer, which connects through a hidden layer to the output layer that produces the network output.]

Learning in neural networks

The goal is to present the network with a range of input patterns and obtain the desired output. One of the most common approaches for training a network is backpropagation in a feedforward neural network. Let's review the basic steps in this approach.

Neural network training


Forward propagation phase:
- Present a pattern.
- Measure the response of the output layer.
- Calculate the difference between the output layer response and the training value.

Backpropagation phase:
- Adjust the weights to obtain the desired output.

This process is repeated for each pattern in the training set in series. The entire set is evaluated repeatedly until the net is trained to a satisfactory level.

Neural network training

As with other classification methods, you can use separate training and evaluation sets or cross-validation. It is assumed that the patterns presented contain information that can be used for classification. Other data evaluation methods can be applied prior to training to ensure that a good net can be produced.
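As an illustration of the two training phases (this is a generic textbook backpropagation loop, not the internals of tlearn or JavaNNS; the network size, data, and learning rate are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Made-up toy problem: 2 inputs -> 2 hidden units -> 1 output (XOR patterns)
W1, b1 = rng.normal(scale=0.5, size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(scale=0.5, size=(1, 2)), np.zeros(1)
lr = 0.5
patterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0.0, 1.0, 1.0, 0.0])

for cycle in range(10_000):              # evaluate the whole set repeatedly
    for x, t in zip(patterns, targets):  # each pattern in series
        # Forward propagation phase
        h = sigmoid(W1 @ x + b1)         # hidden layer response
        y = sigmoid(W2 @ h + b2)[0]      # output layer response
        err = t - y                      # difference from the training value
        # Backpropagation phase: adjust weights toward the desired output
        d_out = err * y * (1 - y)
        d_hid = (W2[0] * d_out) * h * (1 - h)
        W2 += lr * d_out * h
        b2 += lr * d_out
        W1 += lr * np.outer(d_hid, x)
        b1 += lr * d_hid
```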

Types of networks
Many types of network models have been proposed and studied. We will just look at a few examples:
- Backpropagation
- Dynamic learning vector quantization
- Self-organizing maps
This should give you an idea of how widely the approaches can vary.

Free Neural Network Software


What could be better than free? Here are the two systems we'll be looking at.

tlearn
http://crl.ucsd.edu/innate/tlearn.html
- Mac, Windows, and UNIX versions.
- Backpropagation-type networks.
- PCA and clustering of data also provided.
- A bit dated but simple to use.
- We'll use it for one of our examples.

JavaNNS
Java Neural Network Simulator
http://www.ra.cs.uni-tuebingen.de/software/JavaNNS/welcome_e.html

Works on Windows, Mac, and UNIX platforms. Supports several types of NN models, including:
- backpropagation
- counterpropagation
- dynamic learning vector quantization
- Kohonen
- ART
- autoassociative memory
and many others.

Backpropagation

This is one of the most common approaches, which we have already outlined. A typical network consists of:
- Input layer: one function per variable
- Output layer: one function per class
- Hidden layers: optional
During training, the weights are updated after each training pattern is presented.

Backpropagation

[Diagram: example network with an input layer of 6 variables, two hidden layers, and an output layer of 4 classes.]

Arson example

Same data set we looked at before, with 5 classes and 19 variables. Variables were range normalized to 0-1. Classes were encoded as:
Class 1: 000
Class 2: 001
Class 3: 010
Class 4: 011
Class 5: 100

We'll use tlearn for this example.
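As a small illustration of that preprocessing (the function below is ours, not part of tlearn):

```python
import numpy as np

def range_normalize(X):
    """Scale each variable (column) of the data matrix to the 0-1 range."""
    mins, maxs = X.min(axis=0), X.max(axis=0)
    return (X - mins) / (maxs - mins)

# Three-bit class codes matching the slide
class_codes = {1: (0, 0, 0), 2: (0, 0, 1), 3: (0, 1, 0),
               4: (0, 1, 1), 5: (1, 0, 0)}
```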



Arson example
[Screenshot: the network display shows the input, hidden, and output layers; separate files define the network design, data, and classes.]

The .cf file is used to specify a three-layer network.

Dendrograms

PCA plots

These show whether the data tend to fall into clusters that can be classified. It's pretty much the same as what we observed when the arson data file was used earlier.

3D PCA plots

Arson example

The program was then instructed to go through 50,000 training cycles.

Arson example

The activity display indicates which inputs cause each node to fire and their impact on the final pattern.

JavaNNS example

Let's return to the Iris problem and see if a neural network can do the classification. To make the software happy, the data were normalized on a 0-1 scale on a variable-by-variable basis. Classes were assigned as:
0 0 - I. setosa
0 1 - I. versicolor
1 0 - I. virginica

Setting options

[Screenshots: the initial network with random weights; the network is then processed for 50,000 cycles.]

Finally, we can evaluate how well our network works. Here we just used the original training set to see how well each sample is classified.

Results

Example results for the three classes. The brighter the green, the bigger the value.

[Screenshot: data subset and results.]

Pruning

Input or hidden units that do not contribute to the classification can be removed.

Dynamic learning vector quantization


This approach attempts to find natural groupings in a set of data. The assumption is that a data vector can be found that best classifies related samples. Vectors are selected that not only best classify related samples but also maximize the distance from unrelated ones. The end result is very similar to clustering based on PCA with Varimax rotation - very SIMCA-like.

Topology of a DLVQ - initial


[Diagram: an input layer of 42 variables (units 1-42) connected to class vectors, which feed the output layer.]

This example shows an initial setup with 42 input variables, 5 possible classes, and an output (answer) layer.

DLVQ steps
The normalized training set is loaded and the mean vector for each class is determined. Each pattern in the training set is evaluated against the reference vectors. Vectors are adjusted toward samples in their class and away from samples in other classes. The process is repeated until the number of correctly classified samples no longer increases.
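A rough sketch of the vector-adjustment step (a generic LVQ-style update, not JavaNNS's exact DLVQ implementation; the names and learning rate are illustrative):

```python
import numpy as np

def dlvq_epoch(patterns, labels, class_vectors, lr=0.05):
    """One pass of a simplified LVQ update: pull the matching class vector
    toward each sample and push a wrongly winning vector away from it."""
    for x, y in zip(patterns, labels):
        # Distance from this sample to every class reference vector
        dists = np.linalg.norm(class_vectors - x, axis=1)
        winner = np.argmin(dists)
        if winner == y:
            class_vectors[winner] += lr * (x - class_vectors[winner])  # attract
        else:
            class_vectors[winner] -= lr * (x - class_vectors[winner])  # repel
    return class_vectors
```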

DLVQ
[Diagram: the trained network; the final class vectors now differ in size.]

Once training is completed, the final class vectors may vary in size. Some classes may be easier to identify than others.

DLVQ
For this to work, the input patterns within each class must share some similarities, and there must also be something different about each class. The model must be rebuilt if a new class is discovered or a training sample was misidentified.

Self-organizing maps
Self-organizing maps (SOMs) are also called Kohonen feature maps. This is an unsupervised learning method: it clusters related input data while maintaining spatial ordering, so you have some idea of the relationships between your patterns.

Self-organizing maps
SOM systems consist of two layers of units:
- a one-dimensional input layer
- a two-dimensional competitive layer, organized as a 2-D grid of units.
There are no hidden or output layers.

[Diagram: input layer connected to the competitive-layer grid.]

Self-organizing maps
Training: the competitive layer is initialized with normalized vectors. The input pattern vectors are presented to the competitive layer, and the best match (nearest unit) is chosen as the winner. Topological ordering is achieved using a spatial neighborhood relationship between competitive units.
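A minimal sketch of that competitive training loop (a generic SOM; the grid size, learning rate, and neighborhood radius are made-up illustrative values):

```python
import numpy as np

def som_train(patterns, grid_shape=(10, 10), cycles=1000, lr=0.5, radius=3.0):
    """Find the winning unit for each presented pattern and pull it,
    and its grid neighbors, toward the pattern."""
    rng = np.random.default_rng(0)
    n_features = patterns.shape[1]
    weights = rng.random((*grid_shape, n_features))   # random initial vectors
    rows, cols = np.indices(grid_shape)               # grid coordinates of units

    for t in range(cycles):
        x = patterns[rng.integers(len(patterns))]
        # Winner: the unit whose weight vector is nearest to the input
        dists = np.linalg.norm(weights - x, axis=2)
        wr, wc = np.unravel_index(np.argmin(dists), grid_shape)
        # Neighborhood: Gaussian falloff with grid distance from the winner
        grid_dist2 = (rows - wr) ** 2 + (cols - wc) ** 2
        h = np.exp(-grid_dist2 / (2 * radius ** 2))
        decay = 1.0 - t / cycles                      # shrink updates over time
        weights += (lr * decay) * h[..., None] * (x - weights)
    return weights
```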

Self-organizing maps
Once trained, each sample will produce a pattern of activity on the competitive layer. This map can be used to visualize how your samples relate to each other.
