
Chapter 5: Artificial Neural Network
Dr. Mohamad Hafis Izran Bin Ishak
Department of Control & Mechatronics (CMED)
Faculty of Electrical Engineering
Universiti Teknologi Malaysia.

MKEM 1713 Artificial Intelligence



Chapter 5: Artificial Neural Networks (ANNs)
- Introduction to ANNs
- ANN models and examples of application
- Simple neural networks: McCulloch-Pitts Neuron, Hebb, Perceptrons


Objectives
The sub-topic has been organized to have the following objectives:
- To understand the broad concepts of artificial intelligence and artificial neural networks.
- To know the possible applications of artificial neural networks (ANNs).
- To understand the capabilities and limitations of ANNs.
- To understand the underlying concepts of several ANN paradigms.
- To be familiar with at least one ANN software package for the development of ANN applications.

What do I need to understand ANNs?

- Some background in mathematical theory: the chain rule, differential equations, partial derivatives, probability theory, etc.

- Discrete-time equations: sampling interval, etc.

- A programming language.

- Some particular ANN algorithms.


5.1 Overview of Artificial Neural Networks

What is an Artificial Neural Network (ANN)?

- Artificial neural network, or ANN, is the term used to describe a computer model of the biological brain.

- It consists of a set of interconnected simple processing units (neurons or nodes) which combine to output a signal that solves a certain problem, based on the input signals it receives.

- The interconnections between the simple processing units have adjustable gains (weights) that are slowly adjusted through iterations, influenced by the input-output patterns given to the ANN.

- An ANN is an information-processing system that has certain performance characteristics in common with biological neural networks.

- Basically, it is a system that takes in many input signals, processes them, and outputs signals to solve a task that it has been trained to solve.

- Other terms used in the literature:
  - connectionist models
  - parallel distributed processors

[Figure: a two-layer network with inputs x1, ..., xj, intermediate units a1, ..., ak reached through weights Wjk, and outputs O1, ..., Om reached through weights Wkm.]


Another Definition

ANNs have been developed as generalizations of mathematical models of human cognition or neural biology, based on the following assumptions:

- Information processing occurs at many simple elements called neurons (or nodes, processing elements, or units).
- Signals are passed between neurons over connection links.
- Each connection link has an associated weight which, in a typical ANN, multiplies the signal transmitted.
- Each neuron applies an activation function (usually nonlinear) to its net input to determine its output signal.
- A learning rule exists to adapt the weights to solve a particular task (engineering, business, etc.).


The study of neuro-models encompasses many disciplines, such as:
- Neuroscience
- Psychology
- Philosophy
- Computer Science
- Mathematics
- Electrical/Electronics


[Figure: a biological neuron, showing the dendrites, soma, nucleus, axon, and synapse.]

AN ARTIFICIAL NEURON
[Figure: an artificial neuron j receiving input signals oi over weighted links wji and producing the output oj = f(netj).]

Biological Neuron vs. Artificial Neuron

[Figure: an artificial neuron with inputs x1, x2, ..., xN and weights W11, W12, ..., W1N. The unit computes the weighted sum

    sum = W11·x1 + W12·x2 + ... + W1N·xN = Σj W1j·xj

and passes it through an activation function f() to produce the output O1.]
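As a minimal illustration of the neuron above, the sketch below forms the weighted sum and passes it through a binary step activation f(). The step threshold of 0.5 and the weight values are illustrative assumptions, not values taken from the slides.

```python
# A minimal sketch of the artificial neuron above: form the weighted sum
# s = W11*x1 + W12*x2 + ... + W1N*xN, then apply an activation function f().
# The threshold theta = 0.5 and the weights below are illustrative assumptions.

def neuron_output(x, w, theta=0.5):
    s = sum(wj * xj for wj, xj in zip(w, x))   # s = sum_j W1j * xj
    return 1 if s >= theta else 0              # f(s): binary step activation

# Three inputs with hand-picked weights: s = 0.4 + 0.0 + 0.2 = 0.6 -> fires
print(neuron_output(x=[1, 0, 1], w=[0.4, 0.9, 0.2]))   # prints 1
```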

Some KEY words related to ANNs
- Neuro-models
- Learning / teaching phase
- Activation functions
- Model parameters
- Training data
- Inputs / outputs
- Error
- Weights
- Adaptation
- Test / simulation
- Application


5.2 Applications of ANNs

- ANNs are learning systems that can be applied in many types and areas of application.
- Basically, an ANN maps one function into another, and ANNs can be applied to perform the following:
  - Pattern recognition
  - Pattern matching
  - Pattern classification
  - Pattern completion
  - Prediction
  - Clustering
  - Decision making
  - Control
  - ??

Some example areas of application of ANNs
- Engineering (perhaps the largest)
  - electrical, mechanical, civil, etc.
  - robotics
  - industrial systems
- Business
  - finance/banks
  - stocks
  - database and data-mining
- Transportation
  - scheduling of transports
- Medicine
  - ECG monitoring
  - blood classification
- Remote sensing
  - topological mapping
  - digital maps

An example of an ANN application: the recognition of numbers from 0 to 9.


Wood Recognition by Neural Networks

[Figure: sample images of four timber species: Bintangor, Durian, Nyatoh, and Ramin.]


Plant Optimization

[Figure: a plant with inputs F, T, P and concentrations CA, CB feeds both an on-line ANN model and the laboratory. Lab results arrive after a 2-4 hour time delay, while the on-line model provides real-time prediction for on-line analysis.]


Handwriting Recognition: 2 types of information, on-line and off-line

- Off-line handwriting: an image of a digitized document, available as a 2-dimensional bitmap I(x,y).
- On-line handwriting: the dynamic trajectory traced by a stylus; 1-dimensional, temporally sampled information x(t), y(t).
- The two signals are complementary:

[Figure: the same handwritten word shown both as an image and as a pen trajectory.]

- Combination of the two signals: TDNN + SDNN

Dr. Hafis SKEM4173 innovative entrepreneurial global

Input device technologies
*Courtesy of Vision Objects

- Sensitive screen: PDA, PC tablet
- Camera-pen: requires specific textured paper (Anoto)
- Doppler pen: measures relative displacement, not paper-dependent


Recognition technologies

[Figure: handwritten samples of the words "six" and "dix".]

Statistical pattern recognition approaches:
- Neural networks: MLP, convolutional NN
- Kernel-based methods: SVM
- Stochastic models: hidden Markov models (HMMs)
- Hybrid systems (NN/SVM + HMM)

5.3 ANN Models

Generally, an ANN is characterized by:
- its pattern of connections between the neurons (also called its architecture or model);
- the activation function used in the neurons;
- its learning algorithm (the method of determining its weights).

[Figure: a two-layer network with inputs x1, ..., xj, intermediate units a1, ..., ak reached through weights Wjk, and outputs O1, ..., Om reached through weights Wkm.]


Usually, ANNs can be categorized into 3 models (a code sketch of the feedforward case follows the figures below):

- Feedforward: all signals flow in one direction only, i.e., from lower layers (input) to upper layers (output).

- Feedback: signals from neurons in upper layers are fed back to neurons in either their own or lower layers.

- Cellular: neurons are connected in a cellular manner.

[Figure: an input layer, a hidden layer, and an output layer connected by weights: a fully-connected feedforward network.]

[Figure: an input layer, a hidden layer, and an output layer with weighted connections feeding back to earlier layers: a feedback or recurrent network.]

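The sketch below shows a fully-connected feedforward pass as in the first figure: signals flow from the input layer through one hidden layer to the output layer, in one direction only. The sigmoid activation and all weight values are illustrative assumptions; the slides do not fix them.

```python
import math

def layer(x, W, b):
    """One fully-connected layer: weighted sums plus biases, sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + bk)))
            for row, bk in zip(W, b)]

def feedforward(x, W1, b1, W2, b2):
    h = layer(x, W1, b1)       # input layer  -> hidden layer
    return layer(h, W2, b2)    # hidden layer -> output layer

# 2 inputs -> 2 hidden units -> 1 output, with illustrative weights
W1, b1 = [[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1]
W2, b2 = [[1.0, -1.0]], [0.0]
print(feedforward([1.0, 0.0], W1, b1, W2, b2))
```

A feedback (recurrent) network would differ only in that some outputs are fed back as inputs on the next time step.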

5.4 Learning in ANNs

In all of the neural paradigms, the application of an ANN involves two phases:
(1) the learning phase, and
(2) the recall phase.

In the learning phase (usually performed offline), the ANN is trained until it has learned its task (through the adaptation of its weights); the recall phase is then used to solve the task.


- An ANN solves a task when its weights are adapted through a learning phase.

- All neural networks have to be trained before they can be used.

- They are given training patterns, and their weights are adjusted iteratively until an error function is minimized.

- Once the ANN has been trained, no more training is needed.

- Two types of learning prevail in ANNs:
  - Supervised learning: learning with teacher signals or targets.
  - Unsupervised learning: learning without the use of teacher signals.

5.4.1 Supervised Learning

- In supervised learning, the training patterns are provided to the ANN together with a teaching signal or target.

- The difference between the ANN output and the target is the error signal.

- Initially, the output of the ANN gives a large error during the learning phase.

- The error is then minimized through continuous adaptation of the weights, via a learning algorithm, to solve the problem.


- In the end, when the error becomes very small, the ANN is assumed to have learned the task and training is stopped.

- It can then be used to solve the task in the recall phase.

[Figure: learning configuration. Input patterns enter the ANN; its output is compared with the target, and the resulting error drives the iterative adaptation of the weights.]

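The sketch below mirrors the learning configuration above: present input patterns, compare the output against the target, and let the error drive the iterative adaptation of the weights. The least-mean-square style update, learning rate, and epoch count are illustrative assumptions; the slides do not prescribe a particular learning algorithm here.

```python
# A minimal sketch of supervised learning: weights are adapted iteratively
# until the error (target minus output) becomes small.

def train_supervised(patterns, targets, lr=0.1, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            y = b + sum(wi * xi for wi, xi in zip(w, x))   # ANN output
            error = t - y                                  # target - output
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error                                # adapt iteratively
    return w, b

# Learn the linear mapping t = x1 + x2 from four training pairs
w, b = train_supervised([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 2])
print(w, b)   # w approaches [1, 1] and b approaches 0 as the error shrinks
```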

5.4.2 Unsupervised Learning

- In unsupervised learning, the ANN is trained without teaching signals or targets.

- It is only supplied with examples of the input patterns that it will eventually solve.

- The ANN usually has an auxiliary cost function which needs to be minimized, such as an energy function, a distance measure, etc.

- Usually, a neuron is designated as a winner, chosen through competition based on similarities in the input patterns.


- The weights of the ANN are modified such that a cost function is minimized.

- At the end of the learning phase, the weights will have been adapted in such a manner that similar patterns are clustered onto a particular node.

[Figure: input patterns feed an input layer connected to a competitive layer; one neuron emerges as the winner, and the adaptation of the weights is measured by an auxiliary cost function.]

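The sketch below illustrates the competitive scheme described above: no targets are given; for each input the most similar neuron wins the competition and only its weights move toward the input, so similar patterns cluster onto the same node. The squared-distance cost, learning rate, and data are illustrative assumptions.

```python
# A minimal sketch of unsupervised, winner-take-all competitive learning.

def winner(weights, x):
    """Index of the winner neuron: smallest squared distance to the input."""
    d = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    return d.index(min(d))

def train_competitive(patterns, weights, lr=0.3, epochs=20):
    for _ in range(epochs):
        for x in patterns:
            k = winner(weights, x)                  # competition
            weights[k] = [wi + lr * (xi - wi)       # move winner toward x
                          for wi, xi in zip(weights[k], x)]
    return weights

# Two clusters of 2-D inputs and two competing neurons
data = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
print(train_competitive(data, [[0.5, 0.4], [0.4, 0.5]]))
```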

5.5 ANN Paradigms and Classifications

- A number of ANN paradigms have been developed over the past few decades.

- These ANN paradigms are mainly distinguished through their different learning algorithms rather than their models.

- Some ANN paradigms are named after their proposer, such as Hopfield, Kohonen, etc.

- Most ANNs are named after their learning algorithm, such as Backpropagation, Competitive Learning, Counter Propagation, ART, etc., and some are named after their model, such as BAM.

- Basically, a particular ANN can be divided into either a feedforward or a feedback model, and into either a supervised or an unsupervised learning mode.

ANN Classifications

    NN Model       Feedforward                Feedback
    Supervised     Least Mean Square          Reinforcement Learning
                   Backpropagation            Recurrent Backpropagation
                   Fuzzy ARTMAP
                   GRNN
    Unsupervised   Self-Organizing Maps       Adaptive Resonance Theory
                   Competitive Learning       Fuzzy ART
                   Counter Propagation        Boltzmann Learning
                                              Hopfield Network
                                              BAM


ANN Performance

- The performance of an ANN is described by a figure of merit, which expresses the proportion of correctly recalled patterns when input patterns (which may be complete, partially complete, or even noisy) are applied.

- A 100% performance in recalled patterns means that for every trained input stimulus signal, the ANN always produces the desired output pattern.


ANN PERFORMANCE

[Figure: example recall of trained binary codes:
    000 = MOUSE
    001 = RABBIT
    010 = COW]

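The sketch below computes the figure of merit just described: the percentage of trained input stimuli for which the network recalls the desired output pattern. The `recall` function is a hypothetical stand-in for a trained network; the code-to-animal mapping mirrors the slide's example.

```python
# A minimal sketch of the figure of merit: the percentage of trained
# patterns for which the (stand-in) network recalls the desired output.

def figure_of_merit(recall, patterns, desired):
    hits = sum(1 for x, d in zip(patterns, desired) if recall(x) == d)
    return 100.0 * hits / len(patterns)   # % of correctly recalled patterns

memory = {(0, 0, 0): "MOUSE", (0, 0, 1): "RABBIT", (0, 1, 0): "COW"}
recall = lambda x: memory.get(tuple(x))          # stand-in trained network
codes = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
print(figure_of_merit(recall, codes, ["MOUSE", "RABBIT", "COW"]))  # 100.0
```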

5.5.1 Biases and Thresholds

- A bias acts exactly like a weight.

- It is treated as a connection whose activation is always 1.

- It is then adapted in the same way a weight is adapted, according to the learning rule of the ANN.

- Its use is to increase signal levels in the ANN so as to improve convergence.

- Some ANNs do not use any bias signals.


A neural network net input S with a bias signal can be written as follows:

    S = b + Σi xi·wi

[Figure: a two-input neuron with bias; the constant input 1 connects through b, and inputs x1, x2 connect through weights W1, W2 to the summing unit, producing the output y.]


- A threshold (θ) is a value that is used to make some form of decision in an ANN, such as whether the neuron will fire or not fire.

- It is quite similar to a bias but is not adapted.

- An example of a binary threshold is given as follows:

[Figure: a binary threshold activation; the output o steps from 0 to 1 where the net input reaches θ.]

- The equation of the separating line then becomes:

    b + x1·w1 + x2·w2 = θ
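The sketch below combines the two ideas above: the net input includes a bias, S = b + Σi xi·wi, and a fixed (non-adapted) threshold θ decides whether the neuron fires. The numeric values are illustrative assumptions.

```python
# A minimal sketch of net input with bias, plus a fixed firing threshold.

def net_input(x, w, b):
    return b + sum(xi * wi for xi, wi in zip(x, w))   # S = b + sum xi*wi

def fires(x, w, b, theta):
    return net_input(x, w, b) >= theta                # fire iff S >= theta

# Points (x1, x2) satisfying b + x1*w1 + x2*w2 = theta lie on the
# separating line between the firing and non-firing regions.
w, b, theta = [1.0, 1.0], 0.5, 1.5
print(fires([1, 1], w, b, theta))   # S = 2.5 -> True  (fires)
print(fires([0, 0], w, b, theta))   # S = 0.5 -> False (does not fire)
```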

5.6 Simple Neural Networks

Introduction
- Also called primitive neural networks.
- Mainly used as pattern classifiers.
- Usually single-layer in architecture.
- Used in the 1940s-60s for simple applications such as membership of a single class (i.e., either in or out).

[Figure: an input pattern enters the network, which outputs either YES or NO.]

Several examples of these neural networks:
- McCulloch-Pitts neuron ~ the 1st artificial neuron
- Hebb net ~ the 1st implementation of learning in neural nets
- The Perceptron
- ADALINE and MADALINE

Some examples of applications of these nets:
- Detection of heart abnormalities with ECG data as inputs, in 1963 (Specht, Widrow): 46 input measurements, with the output being either normal or abnormal.
- Echo cancellation in telephone lines (Widrow).
- Minsky and Papert used the Perceptron for connected / not-connected pattern classification.

5.6.1 The McCulloch-Pitts Neuron

- Uses only binary activation signals.
- Neurons are connected by directed, weighted paths.
- A connection path is:
  - excitatory if the weight is positive;
  - inhibitory otherwise.
- All excitatory connections into a particular neuron have the same weight.
- The neuron fires when the input into the neuron is greater than the threshold.
- It takes 1 time step for a signal to pass over 1 connection link.
- No learning ~ weights are assigned rather than adapted.
- The neural network is implemented in hardware such as relays and resistors.


Architecture of the McCulloch-Pitts Neuron:

    f(s) = 1   if s >= θ
           0   if s < θ

[Figure: inputs x1, x2, ..., xN feed through weights W11, W12, ..., W1N into a summing unit; the step activation f() with threshold θ produces the output.]

    sum = W11·x1 + W12·x2 + ... + W1N·xN

    O1 = f( Σ(j=1 to N) W1j·xj )
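The sketch below implements the McCulloch-Pitts neuron just described: binary inputs, fixed weights that are assigned rather than learned, and the binary step activation f(s) = 1 if s >= θ else 0. The loop checks the AND and OR units defined on the next two slides (weights 1 with θ = 2 for AND, weights 2 with θ = 2 for OR).

```python
# A minimal sketch of the McCulloch-Pitts neuron: fixed weights, step activation.

def mcculloch_pitts(x, w, theta):
    s = sum(wj * xj for wj, xj in zip(w, x))   # s = sum_j W1j * xj
    return 1 if s >= theta else 0              # fire when s reaches theta

for x in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x, "AND:", mcculloch_pitts(x, [1, 1], 2),
             "OR:",  mcculloch_pitts(x, [2, 2], 2))
```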

Example to solve an AND function

    x1  x2 | y
     1   1 | 1
     1   0 | 0
     0   1 | 0
     0   0 | 0

The threshold θ for y is 2, and both weights are set to W = 1:

    y = f( Σ Wij·xj ) = f( x1 + x2 )

[Figure: x1 and x2 connect with weights W = 1 to the output unit y, whose threshold is 2.]

Example to solve an OR function

    x1  x2 | y
     1   1 | 1
     1   0 | 1
     0   1 | 1
     0   0 | 0

The threshold θ for y is 2, and both weights are set to W = 2:

    y = f( Σ Wij·xj ) = f( 2·x1 + 2·x2 )

[Figure: x1 and x2 connect with weights W = 2 to the output unit y, whose threshold is 2.]


Example to solve an XOR function

    x1  x2 | y
     1   1 | 0
     1   0 | 1
     0   1 | 1
     0   0 | 0

The network uses two intermediate units and one output unit, with a threshold of 2 for each of z1, z2, and y:

    z1 = x1 AND NOT x2   (w = 2 from x1, w = -1 from x2)
    z2 = x2 AND NOT x1   (w = -1 from x1, w = 2 from x2)
    y  = z1 OR z2        (w = 2 from z1, w = 2 from z2)

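The sketch below builds the two-layer construction above, reusing the mcculloch_pitts() unit from the earlier sketch: z1 = x1 AND NOT x2, z2 = x2 AND NOT x1, and y = z1 OR z2, each with threshold 2.

```python
# A minimal sketch of the two-layer XOR network built from M-P units.

def xor_net(x1, x2):
    z1 = mcculloch_pitts([x1, x2], [2, -1], 2)    # x1 AND NOT x2
    z2 = mcculloch_pitts([x1, x2], [-1, 2], 2)    # x2 AND NOT x1
    return mcculloch_pitts([z1, z2], [2, 2], 2)   # y = z1 OR z2

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, "->", xor_net(x1, x2))   # 0, 1, 1, 0
```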

5.6.2 The Hebb Neural Network

- Developed by Donald Hebb, a psychologist, in 1949.

- He actually developed the first learning algorithm for neural networks, in which the weights are adapted iteratively.

- It can be used with patterns that are represented as either binary or bipolar vectors.

- It has a number of limitations and cannot be used in real-world applications.


Hebb Learning Algorithm

Step 0. Set up the NN model (depending on the task to be solved).
        (The model determines the number of inputs, outputs, biases, thresholds, etc.)
        Initialize all weights:

            wi = 0,   i = 1 to n

Step 1. For each input training vector and target output pair u = (u1, u2, ..., un) : t,
        do Steps 2-4.

Step 2. Set the activations of the input units:

            xi = ui,   i = 1 to n

Step 3. Set the activation of the output unit:

            y = t

Hebb Learning Algorithm

Step 4. Adjust the weights:

            wi(new) = wi(old) + xi·y,   i = 1 to n

        Adjust the bias:

            b(new) = b(old) + y

Note that the bias is adjusted exactly like a weight from a unit whose input signal is always 1. The weight update can also be expressed in vector form as

            w(new) = w(old) + x·y

This is often written in terms of the weight change, Δw, as

            Δw = x·y
and
            w(new) = w(old) + Δw
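The sketch below implements Steps 0-4 above: weights start at zero, each training pair is presented once, and the update is w(new) = w(old) + x·y with the output activation y set to the target t. The demo runs one epoch over the binary OR patterns used in Exercise 5.1 below.

```python
# A minimal sketch of the Hebb learning algorithm (one epoch).

def hebb_train(pairs):
    n = len(pairs[0][0])
    w, b = [0.0] * n, 0.0                             # Step 0: wi = 0
    for u, t in pairs:                                # Step 1: each pair u : t
        x, y = u, t                                   # Steps 2-3: activations
        w = [wi + xi * y for wi, xi in zip(w, x)]     # Step 4: wi += xi*y
        b += y                                        #         b  += y
    return w, b

# One epoch over the binary OR patterns (as in Exercise 5.1 below)
print(hebb_train([([1, 1], 1), ([1, 0], 1), ([0, 1], 1), ([0, 0], 0)]))
```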

Exercise 5.1                          w(new) = w(old) + x·y

    u1  u2 | t
     1   1 | 1
     1   0 | 1
     0   1 | 1
     0   0 | 0

[Figure: a two-input neuron with bias; the constant input 1 connects through b, and x1, x2 connect through W1, W2 to the summing unit producing y.]

Using the Hebb network, solve the OR problem above and show the results of the weight adaptation for each pattern over 1 epoch.
Fill in the following table:

    Iter#  x1  x2  t  y | w1  w2  b
      0     -   -  -  - |  0   0  0

5.6.3 The Perceptron

- Developed by Frank Rosenblatt (1958).

- Its learning rule is superior to the Hebb learning rule.

- Rosenblatt proved that the weights converge for the applications the Perceptron can represent.

- However, the Perceptron does not work for nonlinearly separable applications, as proven by Minsky and Papert (1969).

- The activation function used is the binary step function with an arbitrary, but fixed, threshold.

- Weights are adjusted by the Perceptron learning rule:

      wi(new) = wi(old) + α·t·xi


The Perceptron Algorithm

Step 0. Set up the NN model (which follows the problem to be solved).
        Initialize the weights and bias.
        (For simplicity, set the weights and bias to zero, or randomize them.)
        Set the learning rate α (0 < α <= 1) and the threshold θ (0 < θ < 1).
        (For simplicity, α can be set to 1.)

Step 1. While the stopping condition is false, do Steps 2-6.

  Step 2. For each training pair u : t, do Steps 3-5.

    Step 3. Set the activations of the input units:

                xi = ui

    Step 4. Compute the response of the output unit:

                s = b + Σi xi·wi

                     1   if s > θ
                y =  0   if -θ <= s <= θ      (θ is a threshold value assigned between 0 and 1)
                    -1   if s < -θ

The Perceptron Algorithm

    Step 5. Update the weights and bias if an error occurred for this pattern:

                If y ≠ t:
                    wi(new) = wi(old) + α·t·xi
                    b(new)  = b(old) + α·t
                else:
                    wi(new) = wi(old)
                    b(new)  = b(old)

  Step 6. Test the stopping condition:
          If no weights changed in Step 2, stop; else, continue.

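The sketch below implements Steps 0-6 above, with the three-level activation y = 1 if s > θ, 0 if -θ <= s <= θ, and -1 if s < -θ. The demo call uses the Exercise 5.2a settings; taking the initial bias as -1 (like the weights) is an assumption, since the exercise only specifies the initial weights.

```python
# A minimal sketch of the Perceptron training algorithm (Steps 0-6).

def perceptron_train(pairs, w, b, lr, theta, max_epochs=100):
    for _ in range(max_epochs):                             # Step 1
        changed = False
        for x, t in pairs:                                  # Steps 2-3
            s = b + sum(xi * wi for xi, wi in zip(x, w))    # Step 4
            y = 1 if s > theta else (-1 if s < -theta else 0)
            if y != t:                                      # Step 5: adapt on error
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
                changed = True
        if not changed:                                     # Step 6: stop when stable
            break
    return w, b

# Bipolar AND (Exercise 5.2a): initial weights -1, alpha = 0.5, theta = 0.2
pairs = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
print(perceptron_train(pairs, w=[-1.0, -1.0], b=-1.0, lr=0.5, theta=0.2))
```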

Exercise 5.2a

    x1  x2 |  t
     1   1 |  1
     1  -1 | -1
    -1   1 | -1
    -1  -1 | -1

[Figure: a two-input neuron with bias; the constant input 1 connects through b, and x1, x2 connect through W1, W2 to the summing unit producing y.]

Using the Perceptron network, solve the AND problem above (using bipolar activations) and show the results of the weight adaptation for each pattern over 1 epoch.
Choose all initial weights to be -1, α = 0.5, and θ = 0.2.
Fill in the following table:

    Iter#  x1  x2  s  y  t | w1  w2  b


Exercise 5.2b

    x1  x2  x3 |  t
    -1  -1  -1 | -1
    -1  -1   1 |  1
    -1   1  -1 |  1
    -1   1   1 |  1
     1  -1  -1 |  1
     1  -1   1 |  1
     1   1  -1 |  1
     1   1   1 |  1

[Figure: a three-input neuron with bias; the constant input 1 connects through b, and x1, x2, x3 connect through W1, W2, W3 to the unit f producing y.]

Using the Perceptron network (3-input), solve the OR problem above (using bipolar activations) and show the results of the weight adaptation for each pattern over 1 epoch.
Choose all initial weights to be -0.5 and the initial bias to be 1, with α = 0.25 and θ = 0.1.
Fill in the following table.

Iter. #1 (-1 -1 -1, t = -1):
    s = 1 + (-0.5)(-1) + (-0.5)(-1) + (-0.5)(-1)
      = 1 + 0.5 + 0.5 + 0.5 = 2.5
    y = 1 (since 2.5 > 0.1)
    t = -1

    y ≠ t, hence adapt the weights and bias:
    w1(new) = w1(old) + α·t·x1 = -0.5 + (0.25)(-1)(-1) = -0.25
    w2(new) = w2(old) + α·t·x2 = -0.5 + (0.25)(-1)(-1) = -0.25
    w3(new) = w3(old) + α·t·x3 = -0.5 + (0.25)(-1)(-1) = -0.25
    b(new)  = b(old) + α·t     =  1 + (0.25)(-1)       =  0.75


Iter. #2 (-1 -1 1, t = 1):
    s = 0.75 + (-0.25)(-1) + (-0.25)(-1) + (-0.25)(1)
      = 0.75 + 0.25 + 0.25 - 0.25 = 1
    y = 1 (since 1 > 0.1), t = 1
    y = t, hence no adaptation:
    w1 = w2 = w3 = -0.25, b = 0.75

Iter. #3 (-1 1 -1, t = 1):
    s = 0.75 + (-0.25)(-1) + (-0.25)(1) + (-0.25)(-1)
      = 0.75 + 0.25 - 0.25 + 0.25 = 1
    y = 1 (since 1 > 0.1), t = 1
    y = t, hence no adaptation:
    w1 = w2 = w3 = -0.25, b = 0.75

Iter. #4 (-1 1 1, t = 1):
    s = 0.75 + (-0.25)(-1) + (-0.25)(1) + (-0.25)(1)
      = 0.75 + 0.25 - 0.25 - 0.25 = 0.5
    y = 1 (since 0.5 > 0.1), t = 1
    y = t, hence no adaptation:
    w1 = w2 = w3 = -0.25, b = 0.75

Iter. #5 (1 -1 -1, t = 1):
    s = 0.75 + (-0.25)(1) + (-0.25)(-1) + (-0.25)(-1)
      = 0.75 - 0.25 + 0.25 + 0.25 = 1
    y = 1 (since 1 > 0.1), t = 1
    y = t, hence no adaptation:
    w1 = w2 = w3 = -0.25, b = 0.75

Iter. #6 (1 -1 1, t = 1):
    s = 0.75 + (-0.25)(1) + (-0.25)(-1) + (-0.25)(1)
      = 0.75 - 0.25 + 0.25 - 0.25 = 0.5
    y = 1 (since 0.5 > 0.1), t = 1
    y = t, hence no adaptation:
    w1 = w2 = w3 = -0.25, b = 0.75

Iter. #7 (1 1 -1, t = 1):
    s = 0.75 + (-0.25)(1) + (-0.25)(1) + (-0.25)(-1)
      = 0.75 - 0.25 - 0.25 + 0.25 = 0.5
    y = 1 (since 0.5 > 0.1), t = 1
    y = t, hence no adaptation:
    w1 = w2 = w3 = -0.25, b = 0.75

Iter. #8 (1 1 1, t = 1):
    s = 0.75 + (-0.25)(1) + (-0.25)(1) + (-0.25)(1)
      = 0.75 - 0.25 - 0.25 - 0.25 = 0
    y = 0 (since -0.1 <= 0 <= 0.1)
    t = 1

    y ≠ t, hence adapt the weights and bias:
    w1(new) = w1(old) + α·t·x1 = -0.25 + (0.25)(1)(1) = 0
    w2(new) = w2(old) + α·t·x2 = -0.25 + (0.25)(1)(1) = 0
    w3(new) = w3(old) + α·t·x3 = -0.25 + (0.25)(1)(1) = 0
    b(new)  = b(old) + α·t     =  0.75 + (0.25)(1)    = 1


Epoch #1

    x1  x2  x3 |   s    y   t |   w1     w2     w3      b
     -   -   - |              |  -0.5   -0.5   -0.5    1
    -1  -1  -1 |  2.5   1  -1 | -0.25  -0.25  -0.25    0.75
    -1  -1   1 |  1     1   1 | -0.25  -0.25  -0.25    0.75
    -1   1  -1 |  1     1   1 | -0.25  -0.25  -0.25    0.75
    -1   1   1 |  0.5   1   1 | -0.25  -0.25  -0.25    0.75
     1  -1  -1 |  1     1   1 | -0.25  -0.25  -0.25    0.75
     1  -1   1 |  0.5   1   1 | -0.25  -0.25  -0.25    0.75
     1   1  -1 |  0.5   1   1 | -0.25  -0.25  -0.25    0.75
     1   1   1 |  0     0   1 |  0      0      0       1


Epoch #2

    x1  x2  x3 |   s    y   t |   w1    w2    w3     b
     -   -   - |              |  0     0     0      1
    -1  -1  -1 |  1     1  -1 |  0.25  0.25  0.25   0.75
    -1  -1   1 |  0.5   1   1 |  0.25  0.25  0.25   0.75
    -1   1  -1 |  0.5   1   1 |  0.25  0.25  0.25   0.75
    -1   1   1 |  1     1   1 |  0.25  0.25  0.25   0.75
     1  -1  -1 |  0.5   1   1 |  0.25  0.25  0.25   0.75
     1  -1   1 |  1     1   1 |  0.25  0.25  0.25   0.75
     1   1  -1 |  1     1   1 |  0.25  0.25  0.25   0.75
     1   1   1 |  1.5   1   1 |  0.25  0.25  0.25   0.75


Epoch #3

    x1  x2  x3 |   s    y   t |   w1    w2    w3     b
     -   -   - |              |  0.25  0.25  0.25   0.75
    -1  -1  -1 |  0     0  -1 |  0.5   0.5   0.5    0.5
    -1  -1   1 |  0     0   1 |  0.25  0.25  0.75   0.75
    -1   1  -1 |  0     0   1 |  0     0.5   0.5    1
    -1   1   1 |  2     1   1 |  0     0.5   0.5    1
     1  -1  -1 |  0     0   1 |  0.25  0.25  0.25   1.25
     1  -1   1 |  1.5   1   1 |  0.25  0.25  0.25   1.25
     1   1  -1 |  1.5   1   1 |  0.25  0.25  0.25   1.25
     1   1   1 |  2     1   1 |  0.25  0.25  0.25   1.25


Epoch #4

    x1  x2  x3 |   s    y   t |   w1    w2    w3     b
     -   -   - |              |  0.25  0.25  0.25   1.25
    -1  -1  -1 |  0.5   1  -1 |  0.5   0.5   0.5    1
    -1  -1   1 |  0.5   1   1 |  0.5   0.5   0.5    1
    -1   1  -1 |  0.5   1   1 |  0.5   0.5   0.5    1
    -1   1   1 |  1.5   1   1 |  0.5   0.5   0.5    1
     1  -1  -1 |  0.5   1   1 |  0.5   0.5   0.5    1
     1  -1   1 |  1.5   1   1 |  0.5   0.5   0.5    1
     1   1  -1 |  1.5   1   1 |  0.5   0.5   0.5    1
     1   1   1 |  2.5   1   1 |  0.5   0.5   0.5    1


Epoch #5

    x1  x2  x3 |   s    y   t |   w1    w2    w3     b
     -   -   - |              |  0.5   0.5   0.5    1
    -1  -1  -1 | -0.5  -1  -1 |  0.5   0.5   0.5    1
    -1  -1   1 |  0.5   1   1 |  0.5   0.5   0.5    1
    -1   1  -1 |  0.5   1   1 |  0.5   0.5   0.5    1
    -1   1   1 |  1.5   1   1 |  0.5   0.5   0.5    1
     1  -1  -1 |  0.5   1   1 |  0.5   0.5   0.5    1
     1  -1   1 |  1.5   1   1 |  0.5   0.5   0.5    1
     1   1  -1 |  1.5   1   1 |  0.5   0.5   0.5    1
     1   1   1 |  2.5   1   1 |  0.5   0.5   0.5    1

No weights changed during Epoch #5, so the stopping condition of Step 6 is met and training stops.

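The sketch below reproduces the epoch tables above: one pass of the Perceptron rule per pattern on the 3-input bipolar OR data, with the Exercise 5.2b settings (initial weights -0.5, initial bias 1, α = 0.25, θ = 0.1), printing one row per iteration for comparison with the tables.

```python
# A minimal sketch that regenerates the Epoch #1-#5 tables above.

patterns = [([-1, -1, -1], -1), ([-1, -1, 1], 1), ([-1, 1, -1], 1),
            ([-1, 1, 1], 1), ([1, -1, -1], 1), ([1, -1, 1], 1),
            ([1, 1, -1], 1), ([1, 1, 1], 1)]
w, b, lr, theta = [-0.5, -0.5, -0.5], 1.0, 0.25, 0.1

for epoch in range(1, 6):
    print("Epoch #%d" % epoch)
    for x, t in patterns:
        s = b + sum(xi * wi for xi, wi in zip(x, w))
        y = 1 if s > theta else (-1 if s < -theta else 0)
        if y != t:                                   # adapt only on error
            w = [wi + lr * t * xi for wi, xi in zip(w, x)]
            b += lr * t
        print(x, "s=%.2f y=%2d t=%2d" % (s, y, t), "w =", w, ", b =", b)
```

By Epoch #5 no pattern produces an error, matching the final table above.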
