
ARTIFICIAL INTELLIGENCE ASSIGNMENT

Backpropagation, Perceptron and Self-Organizing Map (SOM)

By
YUSTIAN MANTJORO
7410040526

DEPARTMENT OF INFORMATICS ENGINEERING


POLITEKNIK ELEKTRONIKA NEGERI SURABAYA
INSTITUT TEKNOLOGI SEPULUH NOPEMBER
1. Training process for the 2-4-1 ANN structure
A. Feedforward

[Figure: the 2-4-1 feedforward network. Input units xi (x1, x2) feed hidden units hj (h1-h4) through weights vij; the hidden units feed the output unit yk (y1) through weights wjk, and the output O1 is compared against the target T1.]

• Compute the output of each hidden-layer unit (hj)

h_inj = Σi xi vij	— each input-layer unit (xi) is multiplied by its weight (vij) and the products are summed.

Since the network has two input units and four hidden units:

h_in1 = x1v11 + x2v21

h_in2 = x1v12 + x2v22

h_in3 = x1v13 + x2v23

h_in4 = x1v14 + x2v24

hj = f(h_inj) = 1 / (1 + e^(-h_inj))	— the unipolar sigmoid activation function, evaluated at each hidden-layer unit.

h1 = 1 / (1 + e^(-h_in1))

h2 = 1 / (1 + e^(-h_in2))

h3 = 1 / (1 + e^(-h_in3))

h4 = 1 / (1 + e^(-h_in4))
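As an illustrative sketch, the hidden-layer computation above can be written in Python with numpy; the input and weight values below are hypothetical placeholders, not values from this exercise.

    import numpy as np

    def sigmoid(z):
        # unipolar sigmoid: f(z) = 1 / (1 + e^(-z))
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.0, 1.0])              # input units x1, x2 (example values)
    v = np.array([[0.2, -0.3, 0.4, 0.1],  # v[i, j] = weight from x(i+1) to h(j+1)
                  [0.5, 0.1, -0.2, 0.3]]) # hypothetical initial weights

    h_in = x @ v          # h_in_j = sum_i xi * vij -> four net inputs
    h = sigmoid(h_in)     # hj = f(h_inj)           -> four hidden activations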

• Compute the output of the output-layer unit (yk)

y_ink = Σj hj wjk	— each hidden-layer unit (hj) is multiplied by its weight (wjk) and the products are summed.

Since the network has only one output unit y:

y_in1 = Σj hj wj1 = h1w11 + h2w21 + h3w31 + h4w41

yk = f(y_ink) = 1 / (1 + e^(-y_ink))	— the unipolar sigmoid activation function, evaluated at the output-layer unit.

y1 = 1 / (1 + e^(-y_in1))
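The output-layer computation works the same way; the hidden activations and weights below are again hypothetical placeholders.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))   # unipolar sigmoid

    h = np.array([0.6, 0.5, 0.55, 0.45])  # hidden activations h1..h4 (example values)
    w = np.array([0.4, -0.1, 0.3, 0.2])   # w[j] = weight from h(j+1) to y1 (hypothetical)

    y_in = h @ w          # y_in1 = h1*w11 + h2*w21 + h3*w31 + h4*w41
    y = sigmoid(y_in)     # y1 = f(y_in1)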
B. Backpropagation

[Figure: backpropagation through the 2-4-1 network. The error factor δk at the output unit yk is propagated back to the hidden units hj as δj, yielding the weight corrections ∆wjk and ∆vij; the output O1 is compared against the target T1.]

• Compute the delta factor (δ) at the output-layer unit (yk).

δk = (tk - yk) f'(y_ink) = (tk - yk) yk(1 - yk)

Each output-layer unit (yk) receives the target (tk) that corresponds to the input pattern presented during training, and its error is computed. Because the activation function at the output is the unipolar sigmoid, its first derivative is f'(y_ink) = yk(1 - yk).

Since the network has only one output unit:

δ1 = (t1 - y1) y1(1 - y1)

∆wjk = α ∙ δk ∙ hj	— compute the correction for each weight wjk.

∆w11 = α ∙ δ1 ∙ h1

∆w21 = α ∙ δ1 ∙ h2

∆w31 = α ∙ δ1 ∙ h3

∆w41 = α ∙ δ1 ∙ h4
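A minimal sketch of this delta-and-correction step, assuming example values for the target, output, hidden activations, and learning rate α:

    import numpy as np

    alpha = 0.5                           # learning rate (assumed)
    t, y = 1.0, 0.62                      # target t1 and output y1 (example values)
    h = np.array([0.6, 0.5, 0.55, 0.45])  # hidden activations h1..h4 (example values)

    delta_k = (t - y) * y * (1.0 - y)     # delta1 = (t1 - y1) * y1 * (1 - y1)
    dw = alpha * delta_k * h              # dw[j] = correction for weight w(j+1)1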
• Compute the delta factor (δ) at each hidden-layer unit.

δ_inj = Σk δk wjk	— each weight connecting a hidden-layer unit (hj) to the output-layer unit (yk) is multiplied by the delta (δk) and the products are summed.

Since the network has only one output unit:

δ_in1 = δ1w11

δ_in2 = δ1w21

δ_in3 = δ1w31

δ_in4 = δ1w41

δj = δ_inj f'(h_inj) = δ_inj hj(1 - hj)

The sum is then multiplied by the first derivative of the hidden-layer activation function to obtain the error term (δj). Because the activation function at the hidden layer is the unipolar sigmoid, f'(h_inj) = hj(1 - hj).
δ1 = δ_in1 ∙ h1(1 – h1)

δ2 = δ_in2 ∙ h2(1 – h2)

δ3 = δ_in3 ∙ h3(1 – h3)

δ4 = δ_in4 ∙ h4(1 – h4)

∆vij = α ∙ δj ∙ xi	— compute the correction for each weight vij.

h1 h2 h3 h4
x1 ∆v11 = α∙δ1∙x1 ∆v12 = α∙δ2∙x1 ∆v13 = α∙δ3∙x1 ∆v14 = α∙δ4∙x1
x2 ∆v21 = α∙δ1∙x2 ∆v22 = α∙δ2∙x2 ∆v23 = α∙δ3∙x2 ∆v24 = α∙δ4∙x2
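The hidden-layer deltas and ∆v corrections follow the same pattern; the numeric values below are again placeholders.

    import numpy as np

    alpha = 0.5
    delta_k = 0.089                        # output delta delta1 (example value)
    w = np.array([0.4, -0.1, 0.3, 0.2])    # weights h_j -> y1 (hypothetical)
    h = np.array([0.6, 0.5, 0.55, 0.45])   # hidden activations (example values)
    x = np.array([0.0, 1.0])               # inputs x1, x2 (example values)

    delta_in = delta_k * w                 # delta_in_j = delta1 * wj1
    delta_j = delta_in * h * (1.0 - h)     # delta_j = delta_in_j * hj * (1 - hj)
    dv = np.outer(x, alpha * delta_j)      # dv[i, j] = alpha * delta_j * xi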
C. Weight updates

wjk(new) = wjk(old) + ∆wjk	— update the weights connecting each hidden-layer unit (hj) to the output-layer unit (yk).

w11(new) = w11(old) + ∆w11

w21(new) = w21(old) + ∆w21

w31(new) = w31(old) + ∆w31

w41(new) = w41(old) + ∆w41

vij(new) = vij(old) + ∆vij	— update the weights connecting each input-layer unit (xi) to the hidden-layer units (hj).

          h1                            h2                            h3                            h4
x1   v11(new) = v11(old) + ∆v11   v12(new) = v12(old) + ∆v12   v13(new) = v13(old) + ∆v13   v14(new) = v14(old) + ∆v14
x2   v21(new) = v21(old) + ∆v21   v22(new) = v22(old) + ∆v22   v23(new) = v23(old) + ∆v23   v24(new) = v24(old) + ∆v24
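Putting sections A, B, and C together, one complete training step might look like the sketch below; the initial weights, input, target, and learning rate are all hypothetical, and bias terms are omitted because the derivation above does not use them.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    alpha = 0.5                              # learning rate (assumed)
    x = np.array([0.0, 1.0])                 # inputs (example)
    t = 1.0                                  # target (example)
    rng = np.random.default_rng(0)
    v = rng.uniform(-0.5, 0.5, size=(2, 4))  # input -> hidden weights (hypothetical)
    w = rng.uniform(-0.5, 0.5, size=4)       # hidden -> output weights (hypothetical)

    # A. Feedforward
    h = sigmoid(x @ v)                       # hidden activations h1..h4
    y = sigmoid(h @ w)                       # output y1

    # B. Backpropagation of the error factors
    delta_k = (t - y) * y * (1.0 - y)        # output delta
    delta_j = (delta_k * w) * h * (1.0 - h)  # hidden deltas

    # C. Weight updates
    w = w + alpha * delta_k * h              # wj1(new) = wj1(old) + dwj1
    v = v + np.outer(x, alpha * delta_j)     # vij(new) = vij(old) + dvij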
2. Perceptron learning algorithm with the unipolar sigmoid activation function to train a five-input AND gate.

Definition: As an example, I take the last two digits of my NRP (26) and convert them to binary (11010), so my inputs will never be the same as anyone else's.

1st input (x1) = 0
2nd input (x2) = 1
3rd input (x3) = 0
4th input (x4) = 1
5th input (x5) = 1
target (t) = 0 (the target for AND logic with these inputs)
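As a quick check of the conversion, two lines of Python reproduce these inputs (the bits are assigned to x1..x5 starting from the last bit of 11010):

    bits = format(26, "05b")              # NRP digits 26 -> '11010'
    x = [int(b) for b in reversed(bits)]  # x1..x5 = [0, 1, 0, 1, 1]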

[Figure: the five-input perceptron. Input units x1-x5 (values 0, 1, 0, 1, 1) and a bias unit (value 1) feed the output unit y through weights wij; the output O is compared against the target 0.]
1) First iteration:

Weight table from the input layer to the output layer:

          y
x1        0
x2       -0.5
x3       -0.1
x4        0.1
x5        0.5
1 (bias)  1

y_in1 = w01 + Σi xi wi1

y_in1 = w01 + x1w11 + x2w21 + x3w31 + x4w41 + x5w51

     = 1 + 0(0) + 1(-0.5) + 0(-0.1) + 1(0.1) + 1(0.5)

     = 0.1

y1 = f(y_in1) = 1 / (1 + e^(-y_in1))

y1 = 1 / (1 + e^(-0.1)) = 0.52

wi1(new) = wi1(old) + ∆wi1

w11(new) = 0 + 0.5(0 - 0)0 = 0

w21(new) = -0.5 + 0.5(0 - (-0.5))1 = -0.25

w31(new) = -0.1 + 0.5(0 - 0)0 = -0.1

w41(new) = 0.1 + 0.5(0 - 0.1)1 = 0.05

w51(new) = 0.5 + 0.5(0 - 0.5)1 = -0.25


2) Second iteration:

Weight table after the first iteration, from the input layer to the output layer:

          y
x1        0
x2       -0.25
x3       -0.1
x4        0.05
x5       -0.25
1 (bias)  1

y_in1 = w01 + Σi xi wi1

y_in1 = w01 + x1w11 + x2w21 + x3w31 + x4w41 + x5w51

     = 1 + 0(0) + 1(-0.25) + 0(-0.1) + 1(0.05) + 1(-0.25)

     = 0.55

y1 = f(y_in1) = 1 / (1 + e^(-y_in1))

y1 = 1 / (1 + e^(-0.55)) = 0.63
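For reference, here is a minimal sketch of the same training loop in Python, using the standard delta-rule update ∆wi = α(t - y)xi with the initial weights from the first table. Because the hand calculation above applies its own substitutions when computing the corrections, the numbers it produces differ from this sketch.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))       # unipolar sigmoid

    alpha = 0.5
    x = np.array([1.0, 0, 1, 0, 1, 1])        # bias input (1) followed by x1..x5
    w = np.array([1.0, 0, -0.5, -0.1, 0.1, 0.5])  # bias weight and initial weights above
    t = 0.0                                   # target for this AND pattern

    for iteration in range(2):
        y = sigmoid(x @ w)                    # y = f(y_in), y_in = sum_i xi * wi
        w = w + alpha * (t - y) * x           # delta rule: dwi = alpha * (t - y) * xi
        print(iteration + 1, round(y, 2), np.round(w, 3))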
4. Describe the concept of SOFM and how to recognize the alphabetic character S in the 5x7 template-matching problem.

A character is represented by a 5x7 grid of pixels. Graphical printing of an extended-ASCII character uses a grayscale output for each pixel; here the blackened boxes represent the value one and the empty boxes represent zero.

0 1 1 1 0
1 0 0 0 1
1 0 0 0 0
0 1 1 1 0
0 0 0 0 1
1 0 0 0 1
0 1 1 1 0

SOFM Algorithm for Recognizing the Letter 'S'

The letter S:

01110
10001
10000
01110
00001
10001
01110

The letter S is placed in the input vector (xv). The rows of the grid are serialized so that all entries appear on one line:

Letter S = 01110 10001 10000 01110 00001 10001 01110
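As a quick illustration, the serialization is a straightforward flattening of the grid rows:

    rows = ["01110", "10001", "10000", "01110", "00001", "10001", "01110"]
    xv = [int(c) for row in rows for c in row]  # 35-element input vector for 'S'
    assert len(xv) == 35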
Many characters are presented to the Kohonen map and the output response is noted. The Kohonen map runs through its training cycles and learns the input patterns. The weight vectors (wv) develop a pattern: they tend to become aligned with the input vectors, so after a while the weight vector for an input resembles the input pattern being categorized. Since the letter S input has 35 elements, each map unit carries a 35-element weight vector (wv). The Euclidean distance formula measures the similarity between the input vector and each unit's weight vector:

d(xv, wv) = √( Σi (xvi - wvi)² )

Track the weight vector that produces the smallest distance; it identifies the best matching unit, the winner neuron. Train the neurons in the neighbourhood of the winner neuron by pulling their weight vectors closer to the input vector:

wv(t + 1) = wv(t) + Θ(t) α(t) (xv(t) - wv(t))

Increment t and repeat from the distance-computation step while t < λ.
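Tying the steps together, a compact sketch of the SOFM training loop; the 3x3 map size, the Gaussian neighbourhood Θ(t), the decay schedules, and λ = 100 are illustrative assumptions, not values given in the write-up.

    import numpy as np

    rows = ["01110", "10001", "10000", "01110", "00001", "10001", "01110"]
    xv = np.array([int(c) for r in rows for c in r], dtype=float)  # letter 'S', 35 elements

    rng = np.random.default_rng(0)
    grid = 3                                   # 3x3 map of neurons (assumed size)
    wv = rng.random((grid * grid, 35))         # one 35-element weight vector per neuron
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)

    lam = 100                                  # training length lambda (assumed)
    for t in range(lam):
        alpha = 0.5 * (1.0 - t / lam)          # decaying learning rate alpha(t)
        sigma = 1.0 * (1.0 - t / lam) + 0.1    # decaying neighbourhood radius

        d = np.sqrt(((xv - wv) ** 2).sum(axis=1))   # Euclidean distance to each neuron
        bmu = int(np.argmin(d))                     # winner neuron (best matching unit)

        # Gaussian neighbourhood Theta(t) around the winner, in grid coordinates
        g = ((coords - coords[bmu]) ** 2).sum(axis=1)
        theta = np.exp(-g / (2.0 * sigma ** 2))

        # wv(t+1) = wv(t) + Theta(t) * alpha(t) * (xv(t) - wv(t))
        wv += theta[:, None] * alpha * (xv - wv)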
