Identification
System Description
1) Hammerstein models
2) Wiener models
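These two block-oriented structures can be sketched numerically. In the example below (all coefficients and the specific nonlinearities are illustrative choices, not from the source), a Hammerstein model is a static nonlinearity followed by a linear dynamic block, while a Wiener model reverses that order:

```python
import numpy as np

def hammerstein(u, a=0.5, b=1.0):
    # Hammerstein: static nonlinearity g(u) = u^2 (illustrative) feeds a
    # first-order linear block: y(k) = a*y(k-1) + b*g(u(k-1))
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a * y[k-1] + b * u[k-1]**2
    return y

def wiener(u, a=0.5, b=1.0):
    # Wiener: first-order linear block followed by a static nonlinearity
    # x(k) = a*x(k-1) + b*u(k-1);  y(k) = tanh(x(k))
    x = np.zeros(len(u))
    for k in range(1, len(u)):
        x[k] = a * x[k-1] + b * u[k-1]
    return np.tanh(x)

u = np.ones(10)                    # illustrative step input
print(hammerstein(u)[-1])
print(wiener(u)[-1])
```

The only difference between the two models is where the static nonlinearity sits relative to the linear dynamics, which is exactly what makes their identification problems different.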
Identification methods
Recursive and non-recursive identification
System identification can be done in a recursive or non-recursive manner. Recursive identification refers to identification algorithms that use information from previous steps, whereas non-recursive methods use all available data at once. Recursive algorithms can therefore be used on-line and can compensate for changes in the dynamics of the process. In chemical processes, for instance, where the dynamics of the system can change significantly with temperature, recursive identification can be very useful.
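As a sketch of the recursive idea, the snippet below implements standard recursive least squares with a forgetting factor on an illustrative first-order model y(k) = a·y(k-1) + b·u(k-1); the model, the forgetting factor 0.99, and the data are all assumptions for illustration:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    # One recursive least-squares step with forgetting factor lam.
    # theta, phi: column vectors; P: covariance matrix; y: scalar measurement.
    K = P @ phi / (lam + (phi.T @ P @ phi).item())     # gain vector
    theta = theta + K * (y - (phi.T @ theta).item())   # parameter update
    P = (P - K @ phi.T @ P) / lam                      # covariance update
    return theta, P

# Identify y(k) = a*y(k-1) + b*u(k-1) with true a = 0.8, b = 0.5
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.8 * y[k-1] + 0.5 * u[k-1]

theta = np.zeros((2, 1))
P = 1e3 * np.eye(2)
for k in range(1, 200):
    phi = np.array([[y[k-1]], [u[k-1]]])
    theta, P = rls_update(theta, P, phi, y[k])
print(theta.ravel())   # approaches [0.8, 0.5]
```

Because each new sample only updates theta and P, the same loop can run on-line; the forgetting factor lam < 1 discounts old data, which is what lets the estimate track slowly varying process dynamics.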
Nonlinear System Identification
Nonlinear system identification is the task of determining or estimating a system's input-output relationship f based on (possibly noisy) output measurements,

y(t) = f[x(t)] + e(t)

where the error e(t) is noise, disturbance, or another source of error in the measurement process.
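A minimal numerical illustration of this measurement model, assuming an example map f(x) = sin(πx) and a polynomial model structure (both choices are illustrative, not from the source):

```python
import numpy as np

# Noisy measurements y = f[x] + e from an "unknown" map f (example choice)
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(x.size)

# Estimate f with a degree-5 polynomial model fitted by least squares
coeffs = np.polyfit(x, y, deg=5)
y_hat = np.polyval(coeffs, x)
print(np.max(np.abs(y_hat - np.sin(np.pi * x))))  # small residual vs. true f
```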
Structure Selection
NARX Model
NARMAX Model
Volterra Series
Polynomial Model
Optimization Algorithms
Steepest descent
Conjugate gradient
Levenberg-Marquardt
Gauss-Newton
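As a sketch of one of the listed methods, the snippet below implements a basic Levenberg-Marquardt iteration (which blends steepest descent and Gauss-Newton through the damping factor mu) on an illustrative scalar curve-fitting problem; the test function exp(a·x) and all tuning constants are assumptions:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, theta, n_iter=50, mu=1e-2):
    # LM step: solve (J^T J + mu*I) step = J^T r; large mu ~ steepest descent,
    # small mu ~ Gauss-Newton.
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        step = np.linalg.solve(J.T @ J + mu * np.eye(len(theta)), J.T @ r)
        theta_new = theta - step
        if np.sum(residual(theta_new)**2) < np.sum(r**2):
            theta, mu = theta_new, mu * 0.5   # accept step, trust GN more
        else:
            mu *= 2.0                          # reject step, add damping
    return theta

# Fit y = exp(a*x) to data generated with a = 1.3 (illustrative problem)
x = np.linspace(0, 1, 20)
y = np.exp(1.3 * x)
residual = lambda th: np.exp(th[0] * x) - y
jacobian = lambda th: (x * np.exp(th[0] * x)).reshape(-1, 1)
a_hat = levenberg_marquardt(residual, jacobian, np.array([0.5]))
print(a_hat)   # close to [1.3]
```

Dropping mu to zero recovers a pure Gauss-Newton step, while a very large mu gives a short steepest-descent step, which is why LM is often the default for nonlinear least squares.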
Why NN?
Its universal approximation property
Steps Involved
First, the nonlinear model structure must be chosen.
Second, a training algorithm modifies the weights of the neural network to minimize the error criterion.
[Figure: single-neuron model — neural inputs x1, ..., xn with synaptic weights w0, w1, ..., wn; synaptic operation v = w_a x_a^T; somatic (activation) operation y = σ(v); yn: neural output. Weight adaptation is driven by the error between yn and the desired output yd, scaled by a learning rate.]
Highly simplified models of biological neurons
Pattern Recognition
System Identification
Function Approximation
Neuro-Control systems
Optimization Algorithm
Steepest descent
Conjugate gradient
Levenberg-Marquardt
Gauss-Newton
[Figure: multilayer perceptron — input layer (x1, x2), hidden layer, output layer (y1, ..., ym).]

E = (1/2) Σ_{k=1}^{m} (y_{r,k} - y_k)^2
Let us derive the parametric learning law (the back-propagation algorithm, BPA). First consider the output-layer weights w_{k,j}. By using the chain rule, we obtain

∂E/∂w_{k,j} = (∂E/∂y_k)(∂y_k/∂w_{k,j})

where

∂y_k/∂w_{k,j} = z_j,    ∂E/∂y_k = -(y_{r,k} - y_k)

so that

∂E/∂w_{k,j} = -(y_{r,k} - y_k) z_j

and the gradient-descent weight update is

Δw_{k,j} = -η ∂E/∂w_{k,j} = η (y_{r,k} - y_k) z_j
For the hidden-layer weights w_{j,i}, the chain rule runs through every output y_k and the hidden activation z_j = σ(a_j):

∂E/∂w_{j,i} = Σ_{k=1}^{m} (∂E/∂y_k)(∂y_k/∂z_j)(∂z_j/∂a_j)(∂a_j/∂w_{j,i})

where

∂E/∂y_k = -(y_{r,k} - y_k),    ∂y_k/∂z_j = w_{k,j},    ∂a_j/∂w_{j,i} = x_i

so that

∂E/∂w_{j,i} = -Σ_{k=1}^{m} (y_{r,k} - y_k) w_{k,j} σ'(a_j) x_i
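The two chain-rule derivations above can be collected into a minimal one-hidden-layer network with sigmoidal hidden units and linear outputs; the architecture, initialization, and learning rate below are illustrative assumptions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def bp_step(x, y_r, W1, W2, eta=0.5):
    # Forward pass: hidden activations z_j = sigma(a_j), linear outputs y_k
    a = W1 @ x
    z = sigmoid(a)
    y = W2 @ z
    # Output-layer gradient: dE/dw_kj = -(y_rk - y_k) * z_j
    delta_out = -(y_r - y)
    grad_W2 = np.outer(delta_out, z)
    # Hidden-layer gradient: chain rule through w_kj and sigma'(a_j) = z_j(1-z_j)
    delta_hid = (W2.T @ delta_out) * z * (1 - z)
    grad_W1 = np.outer(delta_hid, x)
    # Gradient-descent update with learning rate eta
    return W1 - eta * grad_W1, W2 - eta * grad_W2

# Train on a single pattern and check E = (1/2) sum (y_r - y)^2 decreases
rng = np.random.default_rng(0)
x = np.array([1.0, 0.5])
y_r = np.array([0.8])
W1 = rng.standard_normal((3, 2)) * 0.1
W2 = rng.standard_normal((1, 3)) * 0.1
for _ in range(200):
    W1, W2 = bp_step(x, y_r, W1, W2)
E = 0.5 * np.sum((y_r - W2 @ sigmoid(W1 @ x))**2)
print(E)   # near zero after training
```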
It would be difficult to use the BP algorithm as an online learning mechanism, for several reasons. First, it is easy to get stuck at local minima because of the highly nonlinear relationship in the parameter space arising from the inclusion of the hidden layer. Second, the computation is time-demanding, especially when the number of hidden layers is large. Third, it is discrete in nature and thus sensitive to the step size (learning gain). Above all, the BP algorithm belongs to the category of supervised learning.
[Figure: direct NN control — the reference input r(t) (shaped by the reference model Hm) is compared with the plant output y(t); the NN controller (NNC) generates the control u(t) for the plant G.]
2. Indirect NN control: learns the plant parameters
which are used to construct the controller. It is difficult
to analyze the closed-loop stability of indirect NNC.
[Figure: indirect NN control — the NNC generates u(t) for the plant G; an estimation model of the plant produces an output estimate that is compared with the plant output y(t).]
Recurrent Network
The NNW can be trained from the input-output temporal data of the plant by the dynamic back-propagation method. As mentioned before, the training data can be generated experimentally from the plant, or from simulation results if a mathematical plant model is available. If the plant parameters vary, the NNW model generated by offline training is no longer valid. In such a case, online training of the NNW with adaptive weights is essential. It should be mentioned here that the structure of the NNW will depend on the nature of the dynamical system that is to be emulated.
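The online-training idea can be sketched in its simplest form: a per-sample gradient update of an adaptive weight tracking a plant whose gain changes mid-run (a scalar stand-in for online NNW training; the plant, gain values, and learning rate are all illustrative assumptions):

```python
import numpy as np

# Online (per-sample) training sketch: a single adaptive weight w tracks the
# time-varying plant gain in y(k) = g(k) * u(k); g jumps at k = 250.
rng = np.random.default_rng(2)
w, eta = 0.0, 0.1
for k in range(500):
    g = 1.0 if k < 250 else 2.0   # plant parameters vary mid-run
    u = rng.standard_normal()
    y = g * u                      # plant measurement
    y_hat = w * u                  # model prediction
    e = y - y_hat
    w += eta * e * u               # gradient step on (1/2) e^2
print(round(w, 3))                 # re-converges to the new gain 2.0
```

Offline (batch) training would freeze w at the pre-jump value; the per-sample update is what lets the model follow the parameter change.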
An NN identification model is first generated to emulate the forward model of the plant. This model is then placed in series with the NN controller (instead of the actual plant) to track the reference model. The tuning of the NN controller is now convenient through the NN model.
Fig. 11. Training of neural network for emulation of actual controller.
Nonlinear ARMA (NARMAX)
ARMAX model:

y(k) = Σ_{i=1}^{ny} a_i y(k-i) + Σ_{i=1}^{nu} b_i u(k-i) + Σ_{i=1}^{ne} c_i e(k-i) + e(k)

NARMAX model:

y(k) = f[y(k-1), ..., y(k-ny), u(k-1), ..., u(k-nu), e(k-1), ..., e(k-ne)] + e(k)

NARX (NARMAX when the outputs are assumed to be error-free):

y(k) = f[y(k-1), ..., y(k-ny), u(k-1), ..., u(k-nu)] + e(k)
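A sketch of fitting a polynomial NARX model by least squares, with ny = nu = 1 and a single quadratic term (the model orders, coefficients, and data are illustrative assumptions):

```python
import numpy as np

# Polynomial NARX sketch: y(k) = a*y(k-1) + b*u(k-1) + c*y(k-1)^2
rng = np.random.default_rng(3)
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.3 * y[k-1] + 0.5 * u[k-1] + 0.1 * y[k-1]**2

# Regressor matrix of past outputs/inputs plus the nonlinear term,
# fitted by linear least squares (the model is linear in its parameters)
Phi = np.column_stack([y[:-1], u[:-1], y[:-1]**2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(np.round(theta, 3))   # recovers [0.3, 0.5, 0.1]
```

Because the polynomial NARX model is linear in its parameters, structure selection (which terms to include) is the hard part; the parameter estimate itself is an ordinary least-squares problem.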
Function Approximation by ANN
The two main characteristics associated with the MLP are the universal approximation ability and back-propagation learning.
There are three milestones in ANN history.
(1) In the 1950s, single-layer perceptrons were proposed to mimic the nervous system. However, their linear structure made it difficult to generalize to nonlinear systems. MLPs were proposed, but it was unclear how to carry out parametric learning in such a highly nonlinear structure.
(2) The learning problem was solved in 1986 by the back-propagation learning algorithm, presented in the well-known book Parallel Distributed Processing by Rumelhart, Hinton and Williams.
(3) In 1991, three proofs were given to show the function approximation property of the MLP. The third milestone is of particular importance to control researchers, as they can use ANNs to address nonlinear problems without worrying about the network's ability to represent the required nonlinear mapping.
Block diagram representation of nonlinear system identification and control using artificial neural networks
[Figure: NN identification structure — the input u(k) drives the nonlinear plant and, through tapped delay lines (TDL), the neural network; the network prediction ŷ_p(k+1) is compared (via a z^-1 delay) with the plant output y_p(k) to form the identification error e_i(k).]
[Figure: NN control structure — the reference r(k) and reference-model output y_m(k), together with delayed plant signals through TDLs, feed the neural network controller, which generates u(k) for the nonlinear plant; the identification error e_i(k) and control error e_c(k) drive training.]
DC Motor Model
where
ω_p(t): rotor speed
V_t(t): terminal voltage
i_a(t): armature current
T_L(t): load torque
J: rotor inertia
K: torque and back-emf constant
D: damping constant
R_a: armature resistance
L_a: armature inductance
Identification of DC motor model
The speed equation can be manipulated into the form

V_t(k) = g[ω_p(k+1), ω_p(k), ω_p(k-1)]

where the nonlinear function g[.] (involving ω_p at the three time instants and sign terms of the speed) is assumed to be unknown. An ANN is trained to emulate the unknown function g[.].
[Figure: ANN identification of the DC motor — V_t(k) drives both the DC motor and the neural network identifier (with delayed inputs via z^-1 blocks); the identifier's prediction of ω_p is compared with the motor speed to form the identification error e_i(k+1).]
[Figure: ANN control of the DC motor — the reference r(k) and control error e_c(k) feed the ANN controller, which generates V_t(k+1) for the DC motor; the ANN identifier, trained with the error e_i(k+1), provides the model used to tune the controller; z^-1 blocks denote unit delays.]
Tracking performance of a sinusoidal reference track
B. Subudhi and S. S. Ge, "Sliding mode control and observer based slip ratio control of electric and hybrid electric vehicles," IEEE Trans. on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1617-1626, 2012.
[Figure: NN identification structure (repeated) — u(k) drives the nonlinear plant and, through TDLs, the neural network; the prediction ŷ_p(k+1) is compared with the plant output to form e_i(k).]
[Figure: NN control structure (repeated) — r(k) and the reference-model output y_m(k), with delayed plant signals through TDLs, feed the neural network controller generating u(k) for the nonlinear plant; errors e_i(k) and e_c(k) drive training.]
Case Study:
Application to a DC motor problem
An ANN can be trained to emulate the unknown nonlinear plant dynamics by presenting a suitable set of input/output patterns generated by the plant. An ANN-based identification and control topology, using the MRAC platform, for trajectory control of a DC motor is presented.
DC Motor Model
The DC motor dynamics are given by
the following two equations:
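The two equations did not survive extraction; assuming the standard armature-controlled DC motor model consistent with the symbol list below, they would read:

```latex
V_t(t) = R_a\, i_a(t) + L_a\, \frac{d i_a(t)}{dt} + K\, \omega_p(t)

J\, \frac{d \omega_p(t)}{dt} = K\, i_a(t) - D\, \omega_p(t) - T_L(t)
```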
where
ω_p(t): rotor speed
V_t(t): terminal voltage
i_a(t): armature current
T_L(t): load torque
J: rotor inertia
K: torque and back-emf constant
D: damping constant
R_a: armature resistance
L_a: armature inductance
The matrix m^T corresponds to the reference model coefficients [0.6 0.2].
THANK YOU