Deep Learning for NLP

"Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences."
– Dr. Christopher D. Manning, Dec 2015
Before studying deep learning and the rest of the material, it is recommended that we study the following topics:
• Gradient Descent/Ascent
References/Reading
• Andrej Karpathy's Blog
  • http://karpathy.github.io/2015/05/21/rnn-effectiveness/
• Colah's Blog
  • http://colah.github.io/
Deep Learning vs Machine Learning
• Deep Learning is a subset of Machine Learning
• Machine Learning is a subset of Artificial Intelligence
[Figure: nested circles – Deep Learning inside Machine Learning inside Artificial Intelligence]
Machine Learning: Designed by Humans
Hand-written rules:

if contains('menarik'):
    return positive
...

Input: "Buku ini sangat menarik dan penuh manfaat"
("This book is very interesting and full of benefits")
Predicted label: positive
Machine Learning: Designed by Humans
Feature Engineering!
Hand-designed feature extractor:
Example: using TF-IDF, representing syntactic information with a POS tagger, etc.
The extracted features are fed to a classifier, which produces the output.

Input: "Buku ini sangat menarik dan penuh manfaat"
Predicted label: positive
Machine Learning: Designed by Humans
Complex/High-Level Features

Input: "Buku ini sangat menarik dan penuh manfaat"
Predicted label: positive
History
The Perceptron (Rosenblatt, 1958)
• The perceptron consists of 3 layers: Sensory, Association, and Response.

Rosenblatt, Frank. "The perceptron: a probabilistic model for information storage and organization in the brain." Psychological Review 65.6 (1958): 386.
The activation function is a non-linear function. In Rosenblatt's perceptron, the activation function is plain thresholding (a step function).
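As a rough sketch (not from the slides), a single perceptron unit with a step activation can be written as follows; the weights and input are illustrative choices:

import numpy as np

# A Rosenblatt-style perceptron unit: weighted sum + step function.
def step(z):
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    return step(w @ x + b)   # threshold the weighted sum of the inputs

x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, -0.6, 0.3])
print(perceptron(x, w, 0.1))   # -> 1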
https://www.datarobot.com/blog/a-primer-on-deep-learning/
The Fathers of Deep Learning(?)
• In 2006, these three researchers developed ways to exploit deep neural networks and to overcome the problems of training them.
• Before that, many people had already given up on the usefulness of neural networks and on how to train them.
https://www.datarobot.com/blog/a-primer-on-deep-learning/
The Fathers of Deep Learning(?)
• Automated learning of data representations and features is what the hype is all about!
https://www.datarobot.com/blog/a-primer-on-deep-learning/
Why wasn't "deep learning" successful earlier?
• Most of the key ideas had been discovered long before.
• Even the Long Short-Term Memory (LSTM) network, which is now widely used in NLP, was invented in 1997 by Hochreiter & Schmidhuber.
Why wasn't "deep learning" successful earlier?
• What was missing was a way of training deep networks that works in practice.

"The success of Deep Learning hinges on a very fortunate fact: that well-tuned and carefully-initialized stochastic gradient descent (SGD) can train LDNNs on problems that occur in practice. It is not a trivial fact. [...] And yet, somehow, SGD seems to be very good at training those large deep neural networks on the tasks that we care about. The problem of training neural networks is NP-hard, and in fact there exists a family of datasets such that the problem of finding the best neural network with three hidden units is NP-hard. And yet, SGD just solves it in practice."
(Ilya Sutskever, "A Brief Overview of Deep Learning", …learning.html)
What is Deep Learning?
• Deep Learning is, in essence, (deep) Artificial Neural Networks (ANNs)
• And Neural Networks are really a stack of mathematical functions

Express the problem as a function F (with parameters θ), then automatically search for the parameters θ such that F produces exactly the desired output.

Y = F(X; θ)

X: "Buku ini sangat menarik dan penuh manfaat"
What is Deep Learning?
For Deep Learning, this function is usually a stack of many, typically similar, functions:

Y = F(F(F(X; θ3); θ2); θ1)

[Figure: F(X; θ1), F(X; θ2), F(X; θ3) stacked bottom-up over the input "Buku ini sangat menarik dan penuh manfaat"]

This picture is often referred to as a Computational Graph; the stack of functions is often called a stack of layers.
What is Deep Learning?
• The most common/best-known layer is the Fully-Connected Layer:

Y = F(X) = f(W·X + b)

• "weighted sum of its inputs, followed by a non-linear function"

W ∈ R^(M×N), X ∈ R^N, b ∈ R^M
(N input units, M output units)

Each output unit computes f(Σ_i w_i·x_i + b), where f is the non-linearity.
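As an illustration, a fully-connected layer is essentially a one-liner in numpy; the function name, the sizes, and the choice of tanh as f are our own:

import numpy as np

def fully_connected(X, W, b, f=np.tanh):
    # One fully-connected layer: a weighted sum of the inputs followed
    # by a non-linearity f. W: (M, N), X: (N,), b: (M,) -> output: (M,)
    return f(W @ X + b)

# Example: N = 4 input units, M = 3 output units.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
X = rng.normal(size=4)
print(fully_connected(X, W, b))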
Why do we need "Deep"?
• Humans organize their ideas and concepts hierarchically
• Humans first learn simpler concepts and then compose them to represent more abstract ones
• Engineers break up solutions into multiple levels of abstraction and processing

Y. Bengio, Deep Learning, MLSS 2015, Austin, Texas, Jan 2015
(Bengio & Delalleau 2011)
Neural Networks
One layer:
Y = f(W1·X + b1)
Neural Networks
Two layers:
H1 = f(W1·X + b1)
Y = f(W2·H1 + b2)
i.e., Y = f(W2·f(W1·X + b1) + b2)
Neural Networks
Three layers:
H1 = f(W1·X + b1)
H2 = f(W2·H1 + b2)
Y = f(W3·H2 + b3)
i.e., Y = f(W3·f(W2·f(W1·X + b1) + b2) + b3)
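Stacking such layers is then just repeated function application; this small sketch (sizes and weights are toy choices of ours) composes two fully-connected layers:

import numpy as np

f = np.tanh  # the non-linearity; tanh is our choice here

def forward(X, params):
    # Feed-forward through a stack of layers: H = f(W·H_prev + b)
    H = X
    for W, b in params:
        H = f(W @ H + b)
    return H

# Example: a 4 -> 3 -> 2 network.
rng = np.random.default_rng(1)
params = [(rng.normal(size=(3, 4)), np.zeros(3)),
          (rng.normal(size=(2, 3)), np.zeros(2))]
print(forward(rng.normal(size=4), params))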
Mathematical reasons why "deep"?
• Universal approximation theorem: a feed-forward network with a single hidden layer containing enough units can approximate any continuous function arbitrarily well.
Mathematical reasons why "deep"?
However ...
• "Enough units" can be a very large number. There are functions representable with a small but deep network that would require exponentially many units with a single layer.
• The proof only says that such a shallow network exists; it does not say how to find it.
Training Neural Networks
• Randomly initialize all the parameters W1, b1, W2, b2, W3, b3
[Figure: network with weight matrices W(1), W(2); input: "Buku ini sangat baik dan mendidik"]
Training Neural Networks
• Initialize trainable parameters randomly
• Loop: for x = 1 → #epoch:
  • Pick a training example
  • Compute the output by doing a feed-forward pass
  • Compute the loss L between the predicted label y (e.g., [0.3, 0.7]) and the true label y' (e.g., [1, 0] over pos/neg)
  • Update the parameters using the gradients of L

[Figure: network x → h1 → h2 → y with weights W(1), W(2); input x: "Buku ini sangat baik dan mendidik"; true label y' = [1, 0] over (pos, neg); predicted output y = [0.3, 0.7]; loss L]
A minimal sketch of this loop in code is shown below.
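A self-contained sketch of the training loop on toy data, assuming a 3-4-2 network with tanh hidden units and a softmax/cross-entropy output (all names and numbers are ours):

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)) * 0.5, np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)) * 0.5, np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

dataset = [(rng.normal(size=3), np.array([1.0, 0.0])),
           (rng.normal(size=3), np.array([0.0, 1.0]))]

alpha = 0.1
for epoch in range(100):
    for x, y_true in dataset:                  # pick a training example
        h = np.tanh(W1 @ x + b1)               # feed-forward pass
        y = softmax(W2 @ h + b2)               # predicted label, e.g. [0.3, 0.7]
        loss = -np.sum(y_true * np.log(y))     # cross-entropy loss L
        d2 = y - y_true                        # gradient at the output
        dW2, db2 = np.outer(d2, h), d2
        d1 = (W2.T @ d2) * (1.0 - h ** 2)      # back-propagate through tanh
        dW1, db1 = np.outer(d1, x), d1
        W2 -= alpha * dW2; b2 -= alpha * db2   # update the parameters
        W1 -= alpha * dW1; b1 -= alpha * db1
print(loss)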
Gradient Descent (GD)
Gradient descent is used to find a configuration of the parameters at which the cost function is optimal, in this case reaching a local minimum.

Example: minimize f(x), choosing the starting point x = 2.0.
[Figure: the iterates slide down the curve into a local minimum]
Gradient Descent (GD)
Algorithm:
    x_{t+1} = x_t − α_t · f'(x_t)
    if |f'(x_{t+1})| < ε then return "converged on critical point"
    if |x_t − x_{t+1}| < ε then return "converged on x value"

Tip: choose a step size α_t that is neither too small nor too large.
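A direct transcription of this algorithm (with f(x) = x², α, and ε as our illustrative choices):

# A minimal sketch of 1-D gradient descent, as in the algorithm above.
def gradient_descent(f_prime, x, alpha=0.1, eps=1e-6, max_iters=1000):
    for _ in range(max_iters):
        x_next = x - alpha * f_prime(x)
        if abs(f_prime(x_next)) < eps:
            return x_next, "converged on critical point"
        if abs(x_next - x) < eps:
            return x_next, "converged on x value"
        x = x_next
    return x, "max iterations reached"

# Example: f(x) = x^2 has f'(x) = 2x; start from x = 2.0.
print(gradient_descent(lambda x: 2 * x, 2.0))  # -> near 0.0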
Gradient Descent (GD)
For a cost function f(θ) with parameters θ = (θ1, …, θn), all parameters are updated simultaneously:

while not converged:
    θ1^(t+1) = θ1^(t) − α_t · ∂f/∂θ1(θ^(t))
    θ2^(t+1) = θ2^(t) − α_t · ∂f/∂θ2(θ^(t))
    ...
    θn^(t+1) = θn^(t) − α_t · ∂f/∂θn(θ^(t))
Logistic Regression
Logistic regression is used for binary classification (does an instance belong to the class or not?).

P(y = 1 | x; θ) = h_θ(x)
P(y = 0 | x; θ) = 1 − P(y = 1 | x; θ)

with the sigmoid function as the activation function:

σ(z) = 1 / (1 + e^(−z))
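A small sketch of prediction with these formulas (the vectors here are toy values of ours):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x, theta0, theta):
    # P(y = 1 | x; theta) = sigmoid(theta0 + sum_i theta_i * x_i)
    return sigmoid(theta0 + theta @ x)

x = np.array([0.5, -1.0, 2.0])
theta = np.array([0.8, 0.1, -0.3])
p1 = predict_proba(x, 0.2, theta)
print(p1, 1.0 - p1)   # P(y=1|x), P(y=0|x)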
Logistic Regression
What if θ has not been determined yet?
We can estimate the parameters θ from the available training data {(x(1), y(1)), (x(2), y(2)), …, (x(n), y(n))}.

[Figure: inputs x1, x2, x3 and bias +1 feeding one unit through weights θ1, θ2, θ3 and θ0]
Logistic Regression
Learning
Let
h_θ(x) = σ(θ0 + θ1·x1 + … + θn·xn) = σ(θ0 + Σ_{i=1..n} θi·xi)
Logistic Regression
Learning
Find θ such that the (log-)likelihood of the training data is maximal:

l(θ) = log L(θ)

J(θ) = Σ_{i=1..m} [ y(i)·log h_θ(x(i)) + (1 − y(i))·log(1 − h_θ(x(i))) ]
Logistic Regression
Learning
To optimize J(θ) with gradient ascent we need its partial derivatives. For a single example, differentiating gives:

∂J(θ)/∂θj = (y − h_θ(x))·xj
Logistic Regression
Learning
Batch gradient ascent over all m training examples:

initialize θ1, θ2, …, θn
while not converged:
    θ1 := θ1 + α · Σ_{i=1..m} (y(i) − h_θ(x(i)))·x1(i)
    ...
    θn := θn + α · Σ_{i=1..m} (y(i) − h_θ(x(i)))·xn(i)
Logistic Regression
Learning
Stochastic (per-example) variant, sketched in code below:

initialize θ1, θ2, …, θn
while not converged:
    for each training example (x(i), y(i)):
        θ1 := θ1 + α · (y(i) − h_θ(x(i)))·x1(i)
        θ2 := θ2 + α · (y(i) − h_θ(x(i)))·x2(i)
        ...
        θn := θn + α · (y(i) − h_θ(x(i)))·xn(i)
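A compact sketch of this per-example update on toy data (the labels, learning rate, and epoch count are our choices):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                         # 100 examples, 3 features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)  # toy labels

theta, theta0, alpha = np.zeros(3), 0.0, 0.1
for epoch in range(20):
    for x_i, y_i in zip(X, y):
        h = sigmoid(theta0 + theta @ x_i)
        theta += alpha * (y_i - h) * x_i   # theta_j += alpha*(y - h)*x_j
        theta0 += alpha * (y_i - h)
print(theta)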
Logistic Regression
Learning
In practice, the gradient is often computed as the average/sum over a mini-batch of samples (e.g., 32 or 64 samples), as sketched below.
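A sketch of the mini-batch variant, averaging the gradient over batches of 32 (again with toy data of our own):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

theta, alpha, batch_size = np.zeros(3), 0.5, 32
for epoch in range(50):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        h = sigmoid(X[idx] @ theta)
        grad = (y[idx] - h) @ X[idx] / len(idx)   # mini-batch average
        theta += alpha * grad                     # gradient ascent step
print(theta)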
Multilayer Neural Network (Multilayer Perceptron)
For example, a 3-layer NN with 3 input units, 2 hidden units, and 2 output units.
[Figure: the 3-2-2 network]
[Figure: the same network with its weights labeled, e.g. W11(1), W21(1), W11(2), …]
Multilayer Neural Network (Multilayer Perceptron)
In the previous example there are 2 units in the output layer. This setup is typically used for binary classification: the first unit produces the probability of the first class, and the second unit the probability of the second class.
Multilayer Neural Network (Multilayer Perceptron)
To compute the outputs at the hidden layer:

a1(2) = f(z1(2))
a2(2) = f(z2(2))

This is just a matrix multiplication!

[ z1(2) ]   [ W11(1)  W12(1)  W13(1) ]   [ x1 ]   [ b1(1) ]
[ z2(2) ] = [ W21(1)  W22(1)  W23(1) ] · [ x2 ] + [ b2(1) ]
                                         [ x3 ]

In vector form:
z(2) = W(1)·x + b(1)
a(2) = f(z(2))
Multilayer Neural Network (Multilayer Perceptron)
Learning
m is the number of training examples. With a cross-entropy loss, the per-example cost is

J(W, b; x, y) = − Σ_j y_j · log h_{W,b}(x)_j = − Σ_j y_j · log p_j

plus regularization terms of the form (λ/2) Σ_{l=1..nl−1} Σ_{i=1..sl} Σ_{j=1..sl+1} (W_{ji}(l))².
Multilayer Neural Network (Multilayer Perceptron)
Learning
m is the number of training examples. With a squared-error loss:

J(W, b) = (1/m) Σ_{i=1..m} (1/2)·||h_{W,b}(x(i)) − y(i)||²
          + (λ/2) Σ_{l=1..nl−1} Σ_{i=1..sl} Σ_{j=1..sl+1} (W_{ji}(l))²   ← regularization terms
Multilayer Neural Network (Multilayer Perceptron)
Learning
initialize W, b
while not converged:
    W_{ij}(l) := W_{ij}(l) − α · ∂J(W, b)/∂W_{ij}(l)
    b_i(l)   := b_i(l)   − α · ∂J(W, b)/∂b_i(l)

How do we compute the gradients??
Multilayer Neural Network (Multilayer Perceptron)
Learning
The per-example derivatives ∂J(W, b; x, y) determine the overall partial derivatives of J(W, b); for example:

∂J(W, b)/∂b_i(l) = (1/m) Σ_{k=1..m} ∂J(W, b; x(k), y(k)) / ∂b_i(l)
Multilayer Neural Network (Multilayer Perceptron)
Learning – Back-Propagation
1. Run the feed-forward pass.
2. For each output unit i in layer nl (the output layer), compute

   δ_i(nl) = ∂J(W, b; x, y)/∂z_i(nl) = (a_i(nl) − y_i) · f'(z_i(nl))

3. Back-propagate the deltas to the earlier layers:

   δ_i(l) = ( Σ_j W_{ji}(l) · δ_j(l+1) ) · f'(z_i(l))

4. The per-example gradients are then:

   ∂J(W, b; x, y)/∂W_{ij}(l) = a_j(l) · δ_i(l+1)
   ∂J(W, b; x, y)/∂b_i(l)   = δ_i(l+1)
Multilayer Neural Network (Multilayer Perceptron)
Learning (3)
Back-Propagation – example: computing a gradient at the output layer.

With the squared-error cost
J(W, b; x, y) = (1/2)·(a1(3) − y1)² + (1/2)·(a2(3) − y2)²
and
a1(3) = f(z1(3)),   z1(3) = W11(2)·a1(2) + W12(2)·a2(2) + b1(2),

the chain rule gives

∂J/∂W12(2) = ∂J/∂a1(3) · ∂a1(3)/∂z1(3) · ∂z1(3)/∂W12(2)
           = (a1(3) − y1) · f'(z1(3)) · a2(2)
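The worked gradient can be checked numerically; this sketch assumes f = tanh and toy values (ours) for the weights and activations:

import numpy as np

f, f_prime = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

a_hidden = np.array([0.3, -0.7])          # a1(2), a2(2)
W2 = np.array([[0.5, -0.2], [0.1, 0.4]])  # W(2)
b2 = np.array([0.05, -0.1])
y = np.array([1.0, 0.0])

def loss(W):
    a_out = f(W @ a_hidden + b2)
    return 0.5 * np.sum((a_out - y) ** 2)

# Analytic gradient for W12(2), following the chain rule on the slide:
z1 = W2[0] @ a_hidden + b2[0]
analytic = (f(z1) - y[0]) * f_prime(z1) * a_hidden[1]

# Finite-difference check:
eps = 1e-6
Wp = W2.copy(); Wp[0, 1] += eps
numeric = (loss(Wp) - loss(W2)) / eps
print(analytic, numeric)   # the two values should match closely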
Sensitivity – Jacobian Matrix
The Jacobian J is the matrix of partial derivatives of the network output vector y with respect to the input vector x:

J_{ki} = ∂y_k / ∂x_i
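A quick way to see the Jacobian concretely is to estimate it by finite differences; the two-layer network here is our toy example:

import numpy as np

def network(x):
    W1 = np.array([[0.4, -0.3], [0.2, 0.1]])
    W2 = np.array([[1.0, -0.5], [0.3, 0.8]])
    return np.tanh(W2 @ np.tanh(W1 @ x))

def jacobian(fn, x, eps=1e-6):
    y0 = fn(x)
    J = np.zeros((len(y0), len(x)))
    for i in range(len(x)):
        xp = x.copy(); xp[i] += eps
        J[:, i] = (fn(xp) - y0) / eps   # J[k, i] ~= dy_k / dx_i
    return J

print(jacobian(network, np.array([0.5, -1.0])))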
Recurrent Neural Networks
[Figure: an unrolled RNN – inputs X1…X5, hidden states h1…h5, outputs O1…O5]
One of the famous Deep Learning architectures in the NLP community.
Recurrent Neural Networks (RNNs)
• Producing sequences
• …
• The bottom line: there are sequences involved

[Figure (Karpathy): RNN input/output configurations, e.g.
Sequence Output (e.g. Image Captioning),
Sequence Input/Output (e.g. Machine Translation)]
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Recurrent Neural Networks (RNNs)
"RNNs combine the input vector with their state vector with a fixed (but learned) function to produce a new state vector." (Karpathy)

Suppose there are I input units, K output units, and H hidden units (states):

h_t ∈ R^(H×1),  x_t ∈ R^(I×1),  y_t ∈ R^(K×1)
W(xh) ∈ R^(H×I),  W(hh) ∈ R^(H×H),  W(hy) ∈ R^(K×H)

h_0 = 0
h_t = f(W(xh)·x_t + W(hh)·h_{t−1})
y_t = W(hy)·h_t
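These equations translate directly into a short forward pass (the sizes and tanh are illustrative choices):

import numpy as np

I, H, K, T = 3, 5, 2, 4
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(H, I)) * 0.1
W_hh = rng.normal(size=(H, H)) * 0.1
W_hy = rng.normal(size=(K, H)) * 0.1

xs = rng.normal(size=(T, I))   # an input sequence x_1 ... x_T
h = np.zeros(H)                # h_0 = 0
for t in range(T):
    h = np.tanh(W_xh @ xs[t] + W_hh @ h)   # h_t
    y = W_hy @ h                           # y_t
    print(t, y)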
Recurrent Neural Networks (RNNs)
Back-Propagation Through Time (BPTT): a hidden unit influences the loss both through the output layer and through the hidden layer at the next time step, so both terms appear in its delta:

δ_{i,t}(h) = f'(h_{i,t}) · ( Σ_{j=1..K} W_{ji}(hy)·δ_{j,t}(y) + Σ_{n=1..H} W_{ni}(hh)·δ_{n,t+1}(h) )
j 1
9 October 2017
get the derivatives with respect to the
network weights.
W (hy ) L T
Wi , j
( hh )
t 1
W (hh )
L T
W ( xh ) Wi , j
( hy )
t 1
j ,t si ,t
( y)
L T
X1 X2 Wi , j
( xh )
t 1
j ,t xi ,t
(h)
74
Recurrent Neural Networks (RNNs)
Vanishing & Exploding Gradient Problems
In BPTT, the parameters at step k affect the cost at all later steps (t > k):

∂L_t/∂W(hh) = Σ_{k=1..t} (∂L_t/∂h_t) · (∂h_t/∂h_k) · (∂h_k/∂W(hh))

The temporal factor ∂h_t/∂h_k is a product of t − k Jacobians, which is where the trouble starts.
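A tiny numerical illustration (ours, ignoring the f' factors) of why that product vanishes or explodes: the norm of W(hh)^(t−k) behaves like σ^(t−k), with σ the largest singular value of W(hh):

import numpy as np

rng = np.random.default_rng(0)
H = 10
for scale in (0.05, 0.5):   # small recurrent weights vs. larger ones
    W_hh = rng.normal(size=(H, H)) * scale
    for steps in (1, 10, 50):
        norm = np.linalg.norm(np.linalg.matrix_power(W_hh, steps))
        print(f"scale={scale} steps={steps} |W_hh^steps|={norm:.3e}")
# The small-scale norms shrink toward 0 (vanishing); the larger-scale
# norms blow up (exploding).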
Recurrent Neural Networks (RNNs)
"The exploding gradients problem refers to the large increase in the norm of the gradient during training. Such events are caused by the explosion of the long term components, which can grow exponentially more than short term ones."
And: "The vanishing gradients problem refers to the opposite behaviour, when long term components go exponentially fast to norm 0."
How can this happen? Look at one of the temporal components above.

Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks.
Recurrent Neural Networks (RNNs)
Vanishing & Exploding Gradient Problems
The sequential Jacobian is commonly used to analyze how RNNs make use of context.
[Figure]
Recurrent Neural Networks (RNNs)
2) Define a new architecture inside the RNN cell, such as the Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997).
Long Short-Term Memory (LSTM)
1. An LSTM network is like a standard RNN, except that the summation units in the hidden layer are replaced by memory blocks.
2. These blocks can be thought of as a differentiable version of the memory chips in a digital computer.
3. Each block contains one or more self-connected memory cells and three multiplicative units: the input, output, and forget gates.

Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks
S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997
Long Short-Term Memory (LSTM)
The memory cells can preserve information over long periods of time, thereby mitigating the vanishing gradient problem.

Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks
S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997
Long Short-Term Memory (LSTM)
[Figure: LSTM memory block]
Computation in the LSTM
[Figure: the LSTM block with its gate computations; a sketch in code follows the references below]
Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks
S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997
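A sketch of one LSTM step in the standard formulation (the weight shapes are our toy choices; biases and peephole connections are omitted for brevity):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H, I = 4, 3
rng = np.random.default_rng(0)
W = {g: rng.normal(size=(H, I + H)) * 0.1 for g in "ifoc"}

def lstm_step(x, h_prev, c_prev):
    v = np.concatenate([x, h_prev])
    i = sigmoid(W["i"] @ v)          # input gate
    f = sigmoid(W["f"] @ v)          # forget gate
    o = sigmoid(W["o"] @ v)          # output gate
    c_tilde = np.tanh(W["c"] @ v)    # candidate cell state
    c = f * c_prev + i * c_tilde     # the cell keeps long-term information
    h = o * np.tanh(c)               # new hidden state
    return h, c

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, I)):    # a 5-step input sequence
    h, c = lstm_step(x, h, c)
print(h)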
Example: RNNs for POS Tagging (Zennaki, 2015)

Input:  I    went  to  west  java
Output: PRP  VBD   TO  JJ    NN
LSTM + CRF for Semantic Role Labeling
(Zhou and Xu, ACL 2015)
Attention Mechanism
"A potential issue with this encoder–decoder approach is that a neural network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus." (Bahdanau et al.)
Attention Mechanism
[Figure: sequence-to-sequence encoder–decoder]
Sutskever, Ilya et al., Sequence to Sequence Learning with Neural Networks, NIPS 2014.
https://blog.heuritech.com/2016/01/20/attention-mechanism/
Attention Mechanism
[Figure]
Sutskever, Ilya et al., Sequence to Sequence Learning with Neural Networks, NIPS 2014.
• "Each time the proposed model generates a word in a translation, it (soft-)searches for a set of positions in a source sentence where the most relevant information is concentrated. The model then predicts a target word based on the context vectors associated with these source positions and all the previous generated target words." (Bahdanau et al.)
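A minimal sketch of soft attention with a dot-product scorer (a simplification: Bahdanau et al. score positions with a small feed-forward network instead; all vectors here are toy values of ours):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(6, 8))   # h_1..h_6 for a 6-word source
dec_state = rng.normal(size=8)         # current decoder state s_t

scores = enc_states @ dec_state        # one relevance score per position
weights = softmax(scores)              # attention weights, they sum to 1
context = weights @ enc_states         # context vector for this step
print(weights, context.shape)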
Attention Mechanism
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, Neural Machine Translation by Jointly
Learning to Align and Translate, arXiv:1409.0473, 2016
Attention Mechanism
Each cell represents an attention weight for the translation.
[Figure: source–target attention matrix]
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, Neural Machine Translation by Jointly
Learning to Align and Translate, arXiv:1409.0473, 2016
Attention Mechanism
Colin Raffel, Daniel P. W. Ellis, Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems, Workshop track, ICLR 2016
Attention Mechanism
Yang, Zichao, et al., Hierarchical Attention Networks for Document Classification, NAACL 2016
Attention Mechanism
https://blog.heuritech.com/2016/01/20/attention-mechanism/
Attention Mechanism
https://blog.heuritech.com/2016/01/20/attention-mechanism/
Xu, Kelvin, et al., "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" (2016).
Attention Mechanism
The attention model is used to connect the words in the premise with the words in the hypothesis. For example:
• Premise: "A wedding party taking pictures"

Tim Rocktäschel et al., Reasoning about Entailment with Neural Attention, ICLR 2016
Attention Mechanism
Tim Rocktäschel et al., Reasoning about Entailment with Neural Attention, ICLR 2016
Recursive Neural Networks
R. Socher, C. Lin, A. Y. Ng, and C. D. Manning. 2011. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. In ICML.
Recursive Neural Networks
A parent vector is composed from its two children:

p1 = g(W·[b; c] + bias)

where [b; c] is the concatenation of the child vectors.

Socher et al., Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, EMNLP 2013
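The composition rule translates directly into code; the dimension d, W, and the child vectors are toy choices of ours:

import numpy as np

d = 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d, 2 * d)) * 0.1
bias = np.zeros(d)
g = np.tanh

b = rng.normal(size=d)   # left child (e.g., a word vector)
c = rng.normal(size=d)   # right child
p1 = g(W @ np.concatenate([b, c]) + bias)
print(p1)   # p1 can itself be a child at the next level of the tree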
Convolutional Neural Networks (CNNs) for Sentence Classification
(Kim, EMNLP 2014)
Recursive Neural Network for SMT Decoding.
(Liu et al., EMNLP 2014)