
Perceptrons
The first neural network with the ability to learn
Made up of only input neurons and output neurons
Input neurons typically have two states: ON and OFF
Output neurons use a simple threshold activation function
In its basic form, it can only solve linearly separable problems, which limits its application

How Do Perceptrons Learn?

Uses supervised training

If the output is not correct, the weights are adjusted according to the formula:

w_i = w_i + η(t − y)x_i

where η is the learning rate, t is the desired (target) output, y is the actual output, and x_i is the i-th input.

Perceptrons: Worked Example

Inputs: x1 = 1, x2 = 0, x3 = 1
Weights: w1 = 0.5, w2 = 0.2, w3 = 0.8
Threshold = 1.2, learning rate η = 1
Desired output: t = 0

Feedforward:
1 × 0.5 + 0 × 0.2 + 1 × 0.8 = 1.3
1.3 > 1.2, so the actual output is y = 1, which is incorrect.

Weight updates:
w1 = 0.5 + 1 × (0 − 1) × 1 = −0.5
w2 = 0.2 + 1 × (0 − 1) × 0 = 0.2
w3 = 0.8 + 1 × (0 − 1) × 1 = −0.2
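These updates are easy to check in a few lines of Python (a minimal sketch of the arithmetic above; the variable names are mine):

```python
# Perceptron update for the worked example above.
inputs  = [1, 0, 1]
weights = [0.5, 0.2, 0.8]
threshold, rate, target = 1.2, 1, 0

net = sum(x * w for x, w in zip(inputs, weights))   # = 1.3
output = 1 if net > threshold else 0                # 1.3 > 1.2, so output = 1

# w_i = w_i + rate * (target - output) * x_i
weights = [w + rate * (target - output) * x for x, w in zip(inputs, weights)]
print(weights)  # [-0.5, 0.2, -0.2] (up to float rounding)
```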

The Perceptron Learning Algorithm

1. Initialize the weights (either to zero or to a small random value).
2. Pick a learning rate η (a number between 0 and 1).
3. Until the stopping condition is satisfied (e.g. the weights don't change), repeat:
4. For each training pattern (x, t):
   i. compute the output activation y = f(w · x)
   ii. if y = t, don't change the weights
   iii. if y ≠ t, update the weights: w_i = w_i + η(t − y)x_i
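A sketch of the whole loop in Python, assuming a threshold-at-zero activation with the bias folded in as a constant input of 1 (the AND training set is my illustration, not from the slides):

```python
import random

def train_perceptron(patterns, n_inputs, rate=0.1, max_epochs=100):
    """patterns: list of (x, t) pairs; x is a tuple of inputs, t is the target (0 or 1)."""
    w = [random.uniform(-0.05, 0.05) for _ in range(n_inputs)]        # step 1
    for _ in range(max_epochs):                                       # step 3
        changed = False
        for x, t in patterns:                                         # step 4
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0  # step i
            if y != t:                                                # step iii
                w = [wi + rate * (t - y) * xi for wi, xi in zip(w, x)]
                changed = True
        if not changed:   # stopping condition: weights no longer change
            break
    return w

# Logical AND is linearly separable, so the perceptron can learn it;
# the third input is a constant 1 acting as the bias.
and_data = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 1), 0), ((1, 1, 1), 1)]
print(train_perceptron(and_data, n_inputs=3))
```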

Multilayer Feedforward Networks

The most common neural network
An extension of the perceptron
Multiple layers
- The addition of one or more hidden layers between the input and output layers
Activation function is not simply a threshold
- Usually a sigmoid function
A general function approximator
- Not limited to linearly separable problems
Information flows in one direction
- The outputs of one layer act as inputs to the next layer

XOR Example

Inputs: x1 = 0, x2 = 1

H1: Net = 0(4.83) + 1(−4.83) − 2.82 = −7.65
    Output = 1 / (1 + e^7.65) = 4.758 × 10⁻⁴
H2: Net = 0(−4.63) + 1(4.6) − 2.74 = 1.86
    Output = 1 / (1 + e^−1.86) = 0.8652
O:  Net = (4.758 × 10⁻⁴)(5.73) + 0.8652(5.83) − 2.86 = 2.187
    Output = 1 / (1 + e^−2.187) = 0.8991 ≈ 1
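The forward pass can be verified with a short script (a sketch; the weights and biases are read off the slide, and the helper names are mine):

```python
import math

def sigmoid(net):
    return 1 / (1 + math.exp(-net))

x1, x2 = 0, 1

# Hidden layer, weights and biases from the slide
h1 = sigmoid(x1 * 4.83  + x2 * -4.83 - 2.82)   # net = -7.65, output ≈ 4.758e-4
h2 = sigmoid(x1 * -4.63 + x2 * 4.6  - 2.74)    # net =  1.86, output ≈ 0.8652

# Output layer
o = sigmoid(h1 * 5.73 + h2 * 5.83 - 2.86)      # net ≈ 2.187, output ≈ 0.8991

print(h1, h2, o)   # the network answers ≈ 1 for XOR(0, 1)
```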

Backpropagation
The Backpropagation neural network is a multilayered,
feedforward neural network and is by far the most extensively
used.
It is also considered one of the simplest and most general
methods used for supervised training of multilayered neural
networks.
Backpropagation works by approximating the non-linear
relationship between the input and the output by adjusting the
weight values internally.

Backpropagation

Note: Backpropagation neural networks can have more than one hidden layer.

Backpropagation
Two steps of operations:
1. Feedforward step
- An input pattern is applied to the input layer and its effect
propagates, layer by layer, through the network until an
output is produced.
- The network's actual output value is then compared to the
expected output, and an error signal is computed for each
of the output nodes.


2. Backpropagation step
- Output error signals are transmitted backwards from the output layer to each node in the hidden layer that immediately contributed to the output layer.

Backpropagation

Compute the error of each output node with the Squared Error Function and sum them to get the total error:

E_total = Σ ½(target − output)²
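As a quick sketch, the total error for two output nodes (the sample output values here are illustrative only):

```python
def total_error(targets, outputs):
    """E_total = sum over output nodes of (1/2) * (target - output)^2."""
    return sum(0.5 * (t - o) ** 2 for t, o in zip(targets, outputs))

# Illustrative values:
print(total_error(targets=[0.01, 0.99], outputs=[0.75, 0.77]))  # 0.298
```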

Example: Backpropagation

Inputs: i1 = 0.05, i2 = 0.10
Desired outputs: o1 = 0.01, o2 = 0.99
Activation function: sigmoid

Example: Backpropagation (feedforward step)

Solution: [worked numerically on the original slides]
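The sketch below reproduces the feedforward step. The initial weights w1–w8 and the biases b1, b2 are assumptions (only w5 = 0.40 is confirmed by a later slide), so treat them as illustrative:

```python
import math

def sigmoid(net):
    return 1 / (1 + math.exp(-net))

i1, i2 = 0.05, 0.10

# Assumed initial weights and biases (not preserved in the slide text;
# w5 = 0.40 is the only value confirmed by a later slide).
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60

# Hidden layer
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)   # sigmoid(0.3775) ≈ 0.593270
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)   # sigmoid(0.3925) ≈ 0.596884

# Output layer
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)   # ≈ 0.751365
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)   # ≈ 0.772928

# Total error
E_total = 0.5 * (0.01 - out_o1) ** 2 + 0.5 * (0.99 - out_o2) ** 2
print(E_total)   # ≈ 0.298371
```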

Example: Backpropagation (backpropagation step)

Total error change with respect to the actual output:

∂E_total/∂out_o1 = −(target_o1 − out_o1)

The error at output node o1 is passed back to hidden node h1 through the weight w5:

∂E_o1/∂out_h1 = δ_o1 × w5 = 0.138498562 × 0.40 = 0.055399425

where δ_o1 = ∂E_total/∂out_o1 × ∂out_o1/∂net_o1 is the error signal at o1.

The sigmoid derivative at hidden node h1:

∂out_h1/∂net_h1 = out_h1 × (1 − out_h1) = 0.593269992 × (1 − 0.593269992) = 0.241300709
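A sketch of the backpropagation step for weight w5, continuing the assumed values from the feedforward sketch (the learning rate of 0.5 is also an assumption); the last two prints reproduce the fragments quoted above:

```python
# Values carried over from the feedforward sketch (assumed weights).
out_h1, out_o1 = 0.593269992, 0.751365070
target_o1, w5  = 0.01, 0.40
rate = 0.5   # learning rate: an assumption, not given on the slides

# Chain rule pieces at output node o1
dE_dout_o1   = -(target_o1 - out_o1)        # ≈ 0.741365
dout_dnet_o1 = out_o1 * (1 - out_o1)        # sigmoid derivative ≈ 0.186816
delta_o1     = dE_dout_o1 * dout_dnet_o1    # error signal ≈ 0.138499

# Gradient for w5 and the resulting update
dE_dw5 = delta_o1 * out_h1                  # ≈ 0.082167
w5_new = w5 - rate * dE_dw5                 # ≈ 0.358916

# The two fragments that survive on the slides:
print(delta_o1 * w5)            # ≈ 0.055399425  (error passed back to h1)
print(out_h1 * (1 - out_h1))    # ≈ 0.241300709  (sigmoid derivative at h1)
```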
