
Nonlinear System Identification

System Description

Identification of nonlinear systems using black-box methods
has gained considerable attention in recent decades.
This is mainly because the theory of linear system
identification is already well developed due to the relative
simplicity of linear models, but also because in several cases
linear system identification may not give satisfactory results
when the system has highly nonlinear dynamics.
In some nonlinear systems, though, a linear model can be
sufficient for the purpose of control. In other cases, where the
dynamics of the system vary considerably across operating
points, this may not be possible.
An example where linear system identification fails is flight
control, where gain scheduling is often used to overcome the
problem.

Nonlinear model structures

The most common model structures used for nonlinear system
identification are the discrete-time difference equations NARX and
NARMAX. They are the nonlinear equivalents of the ARX (Auto
Regressive with eXogenous inputs) and ARMAX (Auto Regressive
Moving Average with eXogenous inputs) model structures.
The ARMAX model structure is as follows:

y(k) = sum_{i=1}^{ny} a_i y(k-i) + sum_{i=1}^{nu} b_i u(k-i)
       + sum_{i=1}^{ne} c_i e(k-i) + e(k)

The ARMAX model structure is very general: any linear
finite-order system with stationary disturbance and rational
spectral density can be described using the ARMAX model structure.

By extending ARMAX with nonlinearities in the model
structure, it is generalized to a nonlinear ARMAX structure,
denoted NARMAX.
For a single-input single-output system, the structure is

y(k) = F[y(k-1), ..., y(k-ny), u(k-d), ..., u(k-d-nu),
         e(k-1), ..., e(k-ne)] + e(k)

where F(.) is an arbitrary nonlinear function. This nonlinear
function can for instance be chosen as a polynomial of the
arguments; ny, nu and ne are the orders for each signal,
and d is the time delay of the input.
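The NARX idea can be sketched numerically. In the following illustrative fragment (the plant, the orders ny = nu = 1, and the coefficient values are assumptions, not from the source), a polynomial NARX model is fitted by least squares:

```python
import numpy as np

# Hypothetical plant: y(k) = 0.5*y(k-1) - 0.3*y(k-1)*u(k-1) + u(k-1)
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k - 1] - 0.3 * y[k - 1] * u[k - 1] + u[k - 1]

# NARX regressors with ny = nu = 1, d = 1: polynomial terms of
# y(k-1) and u(k-1), including the cross term.
Y = y[1:]
phi = np.column_stack([y[:-1], u[:-1], y[:-1] * u[:-1]])

# Least-squares estimate of the polynomial coefficients.
theta, *_ = np.linalg.lstsq(phi, Y, rcond=None)
```

Since the data are generated exactly by a polynomial of the chosen regressors, least squares recovers the coefficients; for real data, structure selection (which terms to include) is the hard part.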

Block-oriented nonlinear models

In order to describe nonlinear systems, block-oriented nonlinear


models can sometimes be used. Such models divide the system
into linear and nonlinear subsystems, i.e. blocks.
The most common block-oriented models include

1) Hammerstein models

(Block diagram of the Hammerstein model structure)

Hammerstein models are in many cases a good approximation of
nonlinear systems. The model structure consists of a static
nonlinearity followed by a linear transfer function. The input and
the output are measurable, but the intermediate signal is not
accessible. The parameters of the model are then identified, for
instance, by separating the linear and nonlinear parts and
identifying them iteratively.
Hammerstein models have proven useful in several practical
applications.
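As a concrete illustration (the choice of nonlinearity, the filter coefficients, and the function name below are hypothetical, not from the source), a Hammerstein model can be simulated by passing the input through a static nonlinearity and then a linear filter:

```python
import numpy as np

def hammerstein(u, a=0.8, b=0.5):
    """Hammerstein model: static nonlinearity followed by a
    first-order linear filter y(k) = a*y(k-1) + b*w(k-1),
    where w = f(u) is the (inaccessible) intermediate signal."""
    w = np.tanh(u)                # example static input nonlinearity
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * w[k - 1]
    return y

u = np.ones(100)                  # step input
y = hammerstein(u)
```

Note that only u and y would be observed in practice; the intermediate signal w is internal to the model, which is why identification must separate the two blocks.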

2) Wiener models

Wiener models are similar to Hammerstein models. This model structure
consists of a linear transfer function followed by a static
nonlinearity at the output of the system. The output nonlinearity
might, for instance, represent sensor nonlinearity, but also other
nonlinearities in the system that take effect at the output. This
model structure has also proven useful in several practical
applications, such as chemical processes. It is extensively used in
biomedical applications.

3) Wiener-Hammerstein models are combinations of
the Hammerstein and the Wiener model, where the
system is described by two linear models enclosing
a static nonlinear function.

4) Hammerstein-Wiener models represent another
combination of the Wiener and Hammerstein
models. In the Hammerstein-Wiener structure, the
nonlinearities appear at the input and the output,
and the dynamics are described by a linear transfer
function. Identification of this model structure is
typically performed using the prediction error method.

(Block diagram of the Hammerstein-Wiener model)

Identification methods

Recursive and non-recursive identification

System identification can be done in a recursive or
non-recursive manner. Recursive identification refers
to identification algorithms that use information from
previous steps for identification, whereas non-recursive
methods use all available data at once.
Recursive algorithms can therefore be used on-line
and can compensate for changes in the dynamics of
the process. In chemical processes, for instance,
where the dynamics of the system can change
significantly with temperature, the use of
recursive identification can be very useful.
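A common recursive algorithm of this kind is recursive least squares (RLS). The sketch below (the system, the function name, and the forgetting factor are illustrative assumptions) updates the parameter estimate one sample at a time:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update: refine the estimate theta
    using only the newest regressor phi and output y.
    A forgetting factor lam < 1 discounts old data, which lets the
    algorithm track slow parameter drift."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # correct by prediction error
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P

# Identify y(k) = 1.5*u(k) - 0.7*u(k-1) on-line (illustrative system).
rng = np.random.default_rng(1)
u = rng.normal(size=300)
theta = np.zeros(2)
P = 1e3 * np.eye(2)                          # large P = low initial confidence
for k in range(1, 300):
    phi = np.array([u[k], u[k - 1]])
    yk = 1.5 * u[k] - 0.7 * u[k - 1]
    theta, P = rls_step(theta, P, phi, yk)
```

Because the estimate is refined sample-by-sample, the same loop can run on-line while the process operates, unlike batch least squares.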

Nonlinear System Identification

Nonlinear system identification is the task of
determining or estimating a system's input-output
relationship f based on (possibly noisy) output
measurements,

y(t) = f[x(t)] + e(t)

where the error e(t) is noise, disturbance, or
another source of error in the measurement
process.

Structure Selection
NARX Model
NARMAX Model
Volterra Series
Polynomial Model

Optimization Algorithms
Steepest descent
Conjugate gradient
Levenberg-Marquardt
Gauss-Newton
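As a minimal illustration of one of these algorithms (the cost function and data here are hypothetical, not from the source), a Gauss-Newton iteration for a scalar nonlinear least-squares fit looks like:

```python
import numpy as np

# Gauss-Newton for nonlinear least squares: fit y = exp(a*x) for a.
# The same scheme applies to model-fitting error criteria.
x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.9 * x)              # synthetic data, true a = 0.9

a = 0.0                          # initial guess
for _ in range(20):
    r = y - np.exp(a * x)                  # residual vector
    J = -x * np.exp(a * x)                 # Jacobian dr/da
    a = a - np.sum(J * r) / np.sum(J * J)  # Gauss-Newton step
```

Levenberg-Marquardt modifies the same step by adding a damping term to the normal equations, interpolating between Gauss-Newton and steepest descent; steepest descent itself would use only the gradient, with a much slower convergence rate.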

Neural Network based System Identification

Why NN?
Its universal approximation property.

Steps involved:
First, the nonlinear model structure is chosen.
Second, a training algorithm modifies the
weights of the neural network to minimize
the error criterion.

Mathematical Model of the Neuron

(Figure: neuron model with inputs x(t), plus a bias input x0 = 1,
and synaptic weights w0, w1, ..., wn. The linear synaptic operation
forms v = w^T x, a many-to-one linear mapping; the somatic operation
then applies a nonlinear activation phi(.) to give the neural output
y = phi(v), a one-to-one nonlinear mapping.)

A Simple Way to Explain the Adaptation Rule

(Figure: weight updating. The neural inputs x0 = 1, x1, ..., xn are
weighted and summed to give the neural output yn, which is compared
with the desired output yd. The resulting error yd - yn, scaled by a
learning rate eta, drives the weight update

delta_w = eta * (yd - yn) * x.)
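The update above is the classical delta (LMS) rule. A minimal sketch follows; the target function, learning rate, and iteration count are illustrative assumptions, not from the source:

```python
import numpy as np

def delta_rule_step(w, x, yd, eta=0.1):
    """One weight update of the adaptation rule:
    w <- w + eta * (yd - yn) * x, with neural output yn = w^T x."""
    yn = w @ x
    return w + eta * (yd - yn) * x

# Train a single linear neuron to reproduce yd = 0.5 + 2*x1 - x2,
# where x0 = 1 is the bias input.
rng = np.random.default_rng(2)
w = np.zeros(3)
for _ in range(2000):
    x = np.array([1.0, rng.uniform(-1, 1), rng.uniform(-1, 1)])
    yd = 0.5 + 2.0 * x[1] - 1.0 * x[2]
    w = delta_rule_step(w, x, yd)
```

For this noise-free linear target the weights converge to the true values; with a nonlinear target a single neuron cannot do this, which motivates the multilayer networks discussed next.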

The conventional neural models are

highly simplified models of the biological
neurons, and

consider only the linear summation of the
weighted neural inputs.

Using these models, many neural morphologies, usually
referred to as multilayered feedforward neural
networks (MFNNs), have been reported in the
literature. Some potential applications are:

Pattern Recognition
System Identification
Function Approximation
Neuro-Control Systems

Neural Network based System Identification

Error criterion:
Mean squared error between target and actual
outputs, averaged over all samples.

Optimization Algorithms:
Steepest descent
Conjugate gradient
Levenberg-Marquardt
Gauss-Newton

Concepts of Neural Network Control

Artificial Neural Networks (ANNs), inspired by the human
brain, have been applied to almost all fields of
engineering, including control theory and applications.
As far as control is concerned, we mainly use the
universal function approximation property of an ANN.
By virtue of this property, we can convert the nonlinear
function learning process into a parametric learning
process: learn a set of unknown parameters which are
either linear or nonlinear in the ANN.

Consider the system

x_dot = f(x) + u

where the nonlinear function f(x) is unknown. To the
best of our knowledge, we know it is smooth and well
defined in a function space, say L2(Omega), with Omega
being a compact set. The control task is to force the
state x to follow a given reference trajectory.

Three Layer MLP

Numerous types of ANN have been proposed, such as the MLP network,
RBF network, B-spline network, Hopfield network, wavelet network,
etc.

(Figure: a three-layer MLP with an input layer (x1, x2), a hidden
layer, and an output layer (y1, ..., ym); weights w_{j,i} connect
the input layer to the hidden layer and weights w_{k,j} connect the
hidden layer to the output layer.)

A 3-layer MLP with sigmoid nonlinear activation functions can
approximate a smooth nonlinear function in a compact set with
arbitrary precision.
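A forward pass of such a 3-layer MLP can be sketched as follows; the layer sizes and weight values are arbitrary illustrations, not from the source:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a 3-layer MLP: sigmoid hidden layer
    followed by a linear output layer."""
    z = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # hidden activations
    return W2 @ z + b2                          # network output

# Tiny illustrative network: 2 inputs, 3 hidden neurons, 1 output.
rng = np.random.default_rng(3)
W1 = rng.normal(size=(3, 2))
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3))
b2 = np.zeros(1)
y = mlp_forward(np.array([0.5, -0.2]), W1, b1, W2, b2)
```

The approximation theorem says that, for a given smooth target and accuracy on a compact set, some choice of hidden-layer size and weights of this form suffices; finding those weights is the training problem.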

Function Approximation by ANN

The two main characteristics associated with the MLP are its
universal approximation ability and back-propagation learning.
There are three milestones in ANN history.
(1) In the 1950s, single-layer perceptrons were proposed to
mimic the nervous system. However, their linear feature made
it difficult to generalize to nonlinear systems. The MLP was
proposed, but it was unclear how to carry out parametric
learning in such a highly nonlinear structure.
(2) The learning problem was solved in 1986 by the
back-propagation learning algorithm, presented in the
well-known book Parallel Distributed Processing by
Rumelhart, Hinton and Williams.
(3) In 1991, three proofs were given to show the function
approximation property of the MLP. The third milestone is of
particular importance to control researchers, as they can use
an ANN to address nonlinear problems without worrying about
the exact form of the nonlinearity.

Back Propagation Algorithm (BPA)

The BPA is a gradient descent method aiming at minimizing a quadratic
error criterion

E = (1/2) sum_{k=1}^{m} (y_{r,k} - y_k)^2

Let us derive the parametric learning law (BPA). First consider the
output layer weights w_{k,j}. By using the chain rule, we obtain

dE/dw_{k,j} = (dE/dy_k) (dy_k/dw_{k,j})

with

dy_k/dw_{k,j} = z_j

dE/dy_k = -(y_{r,k} - y_k)

so that

dE/dw_{k,j} = -(y_{r,k} - y_k) z_j

With the gradient update delta_w_{k,j} = -eta * dE/dw_{k,j}, the
convergence property can be derived as

delta_E = sum_{k,j} (dE/dw_{k,j}) delta_w_{k,j}
        = -eta * sum_{k,j} (dE/dw_{k,j})^2 <= 0

Back Propagation Algorithm (BPA)

Next consider the hidden layer weights w_{j,i}. By using the chain
rule, and viewing the output layer weights as tuned parameters
(fixed), we obtain

dE/dw_{j,i} = sum_{k=1}^{m} (dE/dy_k) (dy_k/dz_j) (dz_j/dw_{j,i})

where

dE/dy_k = -(y_{r,k} - y_k)

dy_k/dz_j = w_{k,j}

dz_j/dw_{j,i} = (dz_j/da_j) (da_j/dw_{j,i}) = sigma'(a_j) x_i

with a_j the net input of hidden neuron j and sigma(.) its
activation function.
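The chain-rule expressions above can be checked numerically. The sketch below (network sizes and names are illustrative assumptions) computes both gradients for a sigmoid-hidden, linear-output network and compares one entry against a finite difference:

```python
import numpy as np

def grads(x, yr, W1, W2):
    """Gradients of E = 0.5*sum((yr - y)^2) for a network with
    sigmoid hidden layer z and linear output y = W2 @ z, matching
    the chain-rule derivation: dE/dW2 = -(yr - y) z^T, and the
    hidden-layer term carries sigma'(a_j) x_i."""
    a = W1 @ x
    z = 1.0 / (1.0 + np.exp(-a))
    y = W2 @ z
    e = -(yr - y)                        # dE/dy
    dW2 = np.outer(e, z)                 # output-layer gradient
    dz = W2.T @ e                        # error propagated back to z
    dW1 = np.outer(dz * z * (1 - z), x)  # sigma'(a) = z*(1-z) for sigmoid
    return dW1, dW2

rng = np.random.default_rng(4)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)
yr = rng.normal(size=2)
dW1, dW2 = grads(x, yr, W1, W2)

# Finite-difference check of one hidden-layer gradient entry.
def E(W1_):
    z = 1.0 / (1.0 + np.exp(-(W1_ @ x)))
    return 0.5 * np.sum((yr - W2 @ z) ** 2)

eps = 1e-6
Wp = W1.copy()
Wp[0, 0] += eps
num = (E(Wp) - E(W1)) / eps
```

Subtracting a small multiple of these gradients from the weights is exactly the gradient-descent step the BPA performs.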

Back Propagation Algorithm (BPA)

It would be difficult to use the BP algorithm as an on-line
learning mechanism, for several reasons.
First, it is easy to get stuck at local minima because of
the highly nonlinear relationship in the parametric
space arising from the inclusion of the hidden layer.
Second, the computation is time-demanding,
especially when the number of hidden layers is large.
Third, it is discrete in nature and thus sensitive to the
step size (learning gain). Above all, the BP algorithm belongs
to the category of supervised learning.

Neural Network Control

The NN-based control structure presents the
following advantages in comparison with
traditional ones:
the controller design does not rely on a
mathematical model;
the drive is completely self-commissioned and
does not require any tuning procedure;
the on-line training of the NN-based speed
controller makes the drive insensitive to
parameter variations;
the transient-state position tracking error is
reduced due to the presence of the NN-based
controller.

ANN Control System Configurations

1. Direct NN control

(Figure: the reference input r(t) drives a reference model Hm; the
NN controller (NNC) generates the control u(t) for the plant G,
whose output y(t) is compared with the reference model output.)

Direct NN control directly learns the controller parameters.


2. Indirect NN control: learns the plant parameters,
which are used to construct the controller. It is difficult
to analyze the closed-loop stability of indirect NNC.

(Figure: an estimation model runs in parallel with the plant G; the
NNC generates u(t), and the estimation model output is compared
with the plant output y(t).)

NNs

NNs are able to emulate any unknown nonlinear system when presented
with a suitable set of input-output patterns generated by the plant
to be modeled.
The NNs' learning capabilities are fully utilized only if the weights
and biases are updated on-line. This adaptive property of NN-based
control makes the drive system robust to load disturbances and
parameter variations.

NEURAL NETWORKS FOR DYNAMICAL SYSTEMS

The feedforward NNWs, discussed in the previous section,
can give only a static input-output nonlinear (or linear)
mapping, i.e., the output remains fixed at any instant for a
fixed input at that instant.
In many applications, a NNW is required to be dynamic; that is,
it should be able to emulate a dynamic system with temporal
behavior, such as identification of a machine model, or control
and estimation of flux, speed, etc. Such a network has a storage
property like that of a capacitor or inductor.

Recurrent Network

A recurrent neural network (RNN) normally uses feedback from
the output layer to an earlier layer, and is often defined as a
feedback network.
Fig. 5(a) shows the general structure of a two-layer real-time
RNN.
The feedback network with time delays (z^-1) shown can emulate
a dynamical system. The output in this case depends not only
on the present input but also on prior inputs, thus giving the
network temporal behavior.
If, for example, the input is a step function, the response will
reverberate in the time domain until a steady-state condition is
reached at the output. The network can emulate the nonlinear
differential equations that are characteristic of a nonlinear
dynamical system. Of course, if the TFs of the neurons are linear,
it will represent a linear system. Such a network can be trained by
the dynamical back-propagation algorithm.

Fig. 5. (a) Structure of a real-time recurrent network (RNN).
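The step-response behavior described above can be reproduced with a minimal recurrent layer; the weights and sizes below are arbitrary illustrations, not from the source:

```python
import numpy as np

def rnn_step(h, x, Wh, Wx):
    """One update of a simple recurrent layer: the delayed state h
    (fed back through z^-1 elements) joins the current input x."""
    return np.tanh(Wh @ h + Wx * x)

# Step response: although the input is constant, the output evolves
# over time, i.e. the network has temporal (dynamic) behavior.
Wh = np.array([[0.5, -0.2],
               [0.1, 0.4]])
Wx = np.array([1.0, 0.5])
h = np.zeros(2)
traj = []
for k in range(50):
    h = rnn_step(h, 1.0, Wh, Wx)      # constant (step) input
    traj.append(h.copy())
```

With these stable feedback weights the state transient "reverberates" for a few steps and then settles to a steady state, as described above; a static feedforward network would jump to its final value immediately.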

The desired time-domain output
from the reference dynamical
system (plant) can be used
step-by-step to force the ANN (or
RNN) output to track it by tuning
the weights dynamically,
sample-by-sample.

Fig. 5. (b) Block diagram for training.

As an example, consider a one-input one-output RNN which is desired
to emulate a series nonlinear R-L-C circuit [plant in Fig. 5(b)].
A step voltage signal x(k) is applied to the plant and the RNN
simultaneously. The current response y(k+1) of the plant is the
target signal, and it is used to tune the RNN weights. The RNN will
then emulate the R-L-C circuit model. On the same principle, the RNN
can emulate a complex dynamical system. An example application in a
power electronic system for adaptive flux estimation will be
discussed later using the EKF algorithm.


Fig. 6. Time-delayed neural network with tapped delay line.


Fig. 7. Neural network with time-delayed input and output.


Static NNW With Time-Delayed Input and Feedback

The structure of the NNW is shown in Fig. 7, where there are
time-delayed inputs as well as time-delayed outputs as feedback
signals. In this case, the NNW is required to emulate the dynamical
system given by

y(k) = f[y(k-1), ..., y(k-ny), u(k-1), ..., u(k-nu)]

The NNW can be trained from the input-output temporal data of the
plant by the dynamic back-propagation method. As mentioned before,
the training data can be generated experimentally from the plant, or
from simulation results if a mathematical plant model is available.
If the plant parameters vary, the NNW model generated by the offline
training method is no longer valid. In such a case, online training
of the NNW with adaptive weights is essential. It should be mentioned
here that the structure of the NNW depends on the nature of the
dynamical system which is to be emulated.
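Building such a training set from the tapped delay lines can be sketched as follows; the function name and the toy data are illustrative assumptions, not from the source:

```python
import numpy as np

def tapped_delay_regressors(u, y, nu, ny):
    """Build NNW training data for the time-delayed structure:
    each row holds the tapped-delay-line values
    [y(k-1), ..., y(k-ny), u(k-1), ..., u(k-nu)];
    the corresponding target is y(k)."""
    n = max(nu, ny)
    X, T = [], []
    for k in range(n, len(y)):
        past_y = [y[k - i] for i in range(1, ny + 1)]
        past_u = [u[k - i] for i in range(1, nu + 1)]
        X.append(past_y + past_u)
        T.append(y[k])
    return np.array(X), np.array(T)

# Toy input-output record, just to show the regressor layout.
u = np.arange(10.0)
y = 2.0 * np.arange(10.0)
X, T = tapped_delay_regressors(u, y, nu=2, ny=2)
```

The rows of X are exactly the vectors the delayed-input NNW sees at its input layer, so any static network trained on (X, T) emulates the dynamical mapping above.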

Fig. 8. Training of inverse dynamic model of plant.


C. Neural Network Control of Dynamical Systems

The control of a dynamical system, such as an
induction motor vector drive, by an AI
technique is normally defined as intelligent
control. Although all branches of AI have
been used for intelligent control, only NNW-based
control will be covered in this section.

Inverse Dynamics-Based Adaptive Control

The identification of the forward dynamical model
of the plant has been discussed so far. It is also
possible to identify the inverse plant model
by training. In this case, the plant response data
y(k) is impressed as the input of the NNW, and
its calculated output is compared with the plant
input, which is the target data. The resulting
error e(k) trains the network as shown, so that
e(k) falls to an acceptably small value.

After satisfactory training and testing, the NNW
represents the inverse dynamical model of the plant.
This NNW-based inverse model can be placed in series
as a controller with the actual plant so that the
plant forward dynamics is totally compensated.

Fig. 9. Inverse dynamic model-based adaptive control of a plant.

Model Reference Adaptive Control (MRAC)

I. MRAC by neural network (direct method)
II. MRAC by neural network (indirect method)

The plant output is desired to track the dynamic
response of the reference model.
The reference model can be represented by dynamical
equation(s) and solved in real time by a DSP.
The error signal e(k) between the actual plant output
and the reference model output trains the NNW
controller online so as to make the tracking error
zero. The plant with the NN then has a response
identical to that of the reference model.
Problem of the direct method: the plant lies between
the controller and the error, and there is no way to
propagate the error backward through the controller
by error back-propagation training.
This problem is solved by the indirect method: an NN
identification model is first generated to emulate the
forward model of the plant. This model is then placed
in series with the NN controller (instead of the actual
plant) to track the reference model. The tuning of the
NN controller is now convenient through the NN model.

Fig. 11. Training of neural network for emulation of actual controller.

Nonlinear ARMAX (NARMAX)

ARMAX model:

y(k) = sum_{i=1}^{ny} a_i y(k-i) + sum_{i=1}^{nu} b_i u(k-i)
       + sum_{i=1}^{ne} c_i e(k-i) + e(k)

If we assume the system to be noise free, this reduces to an
ARMA-type model.
A wide range of nonlinear systems can be represented as:

y(k) = f[y(k-1), ..., y(k-ny), u(k-1), ..., u(k-nu),
         e(k-1), ..., e(k-ne)] + e(k)

This is the NARMAX model; when the outputs are assumed to be
error free, it reduces to NARX.
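The ARMAX difference equation above can be simulated directly. In the fragment below the coefficients and signals are illustrative assumptions; with e = 0 the model reduces to its deterministic part, as noted above:

```python
import numpy as np

def simulate_armax(a, b, c, u, e):
    """Simulate y(k) = sum a_i y(k-i) + sum b_i u(k-i)
    + sum c_i e(k-i) + e(k)."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        y[k] = e[k]
        y[k] += sum(a[i] * y[k - 1 - i]
                    for i in range(len(a)) if k - 1 - i >= 0)
        y[k] += sum(b[i] * u[k - 1 - i]
                    for i in range(len(b)) if k - 1 - i >= 0)
        y[k] += sum(c[i] * e[k - 1 - i]
                    for i in range(len(c)) if k - 1 - i >= 0)
    return y

u = np.ones(50)                  # step input
e = np.zeros(50)                 # noise-free case
y = simulate_armax([0.5], [1.0], [0.3], u, e)
```

A NARMAX simulation replaces the three linear sums with an arbitrary nonlinear function f(.) of the same delayed arguments.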

Nonlinear System Modelling

Using NNs we can develop an approximation to the nonlinear
mapping of the NARX model.
Any class of NNs that is capable of nonlinear mapping
approximation can be used to learn the function f(.).
According to Narendra and Parthasarathy, a nonlinear system
can be identified as a member of the following models:

M1: y(k) = sum_{i=1}^{ny} a_i y(k-i)
           + f_u[u(k-1), u(k-2), ..., u(k-nu)] + e(k)

M2: y(k) = f_y[y(k-1), y(k-2), ..., y(k-ny)]
           + sum_{i=1}^{nu} b_i u(k-i) + e(k)

M3: y(k) = f_y[y(k-1), y(k-2), ..., y(k-ny)]
           + f_u[u(k-1), u(k-2), ..., u(k-nu)] + e(k)

M4: y(k) = f[y(k-1), y(k-2), ..., y(k-ny),
             u(k-1), ..., u(k-nu)] + e(k)


Identification Using Neural Networks

The use of neural networks emerges as a feasible solution for
nonlinear system identification.
Their universal approximation properties make them a useful tool
for modeling nonlinear systems.
Neural networks such as the MLP and RBF give successful
results with evolutionary algorithms.
One of the most useful approaches to nonlinear system
identification is based on NARMAX modeling.
There are two structures for this identification:
Parallel configuration
Parallel-Series configuration

Identification of a nonlinear system using MLPNN (figure)

Parallel-Series NARX SI approach using RBFNN (figure)

Block diagram representation of nonlinear system identification
and control using artificial neural networks (figure)

Identification of nonlinear plants using neural networks

(Figure: the input u(k) drives the nonlinear plant, producing
y_p(k+1). Tapped delay lines (TDL) of u(k) and y_p(k) feed a neural
network; its prediction of y_p(k+1) is compared with the plant
output to form the identification error e_i(k).)

Indirect adaptive control using neural networks

(Figure: the reference input r(k) drives a reference model producing
y_m(k). A neural network identifier, fed through TDLs by u(k) and
y_p(k), predicts y_p(k+1), giving the identification error e_i(k).
A neural network controller, fed through TDLs, generates u(k) for
the nonlinear plant; the control error e_c(k) is formed between the
plant output and the reference model output.)

Application to a DC motor problem

An ANN can be trained to emulate unknown nonlinear plant
dynamics by presenting a suitable set of input/output
patterns generated by the plant.
An ANN-based identification and control topology using the
MRAC platform for trajectory control of a DC motor is
presented.

DC Motor Model

The DC motor dynamics are given by the following two equations:

K w_p(t) + R_a i_a(t) + L_a [di_a(t)/dt] = V_t(t)

K i_a(t) = J [dw_p(t)/dt] + D w_p(t) + T_L(t)

where
w_p(t)   rotor speed
V_t(t)   terminal voltage
i_a(t)   armature current
T_L(t)   load torque
J        rotor inertia
K        torque & back-emf constant
D        damping constant
R_a      armature resistance
L_a      armature inductance

The load torque T_L(t) can be expressed as a nonlinear
function of the rotor speed,

T_L(t) = psi(w_p)

Discrete-time DC motor model

T_L(t) = mu * w_p^2(t) * sign(w_p(t))

where mu is a constant.
The discrete-time speed equation governing the system
dynamics is given by

w_p(k+1) = alpha_1 w_p(k) + alpha_2 w_p(k-1)
           + beta V_t(k) + gamma sign(w_p(k)) w_p^2(k)
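The discrete-time speed equation can be simulated for a constant terminal voltage. The coefficient values below are purely illustrative assumptions (the source does not give numeric values):

```python
import numpy as np

# Illustrative coefficients for the discrete-time speed equation;
# these values are assumptions, chosen only to give a stable response.
alpha1, alpha2, beta, gamma = 0.8, 0.1, 0.5, -0.02

def motor_step(w, w_prev, vt):
    """One step of w(k+1) = alpha1*w(k) + alpha2*w(k-1)
    + beta*Vt(k) + gamma*sign(w(k))*w(k)^2."""
    return (alpha1 * w + alpha2 * w_prev + beta * vt
            + gamma * np.sign(w) * w ** 2)

# Drive the motor with a constant terminal voltage and let the
# speed settle to its steady-state value.
w_prev, w = 0.0, 0.0
for k in range(200):
    w_prev, w = w, motor_step(w, w_prev, 1.0)
```

Running such a simulation is one way to generate the input/output training patterns mentioned above when the real plant is not available.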

Identification of the DC motor model

The speed equation can be manipulated into the form

V_t(k) = g[w_p(k+1), w_p(k), w_p(k-1)]

where the function g[.] is given by

g[w_p(k+1), w_p(k), w_p(k-1)]
  = {w_p(k+1) - alpha_1 w_p(k) - alpha_2 w_p(k-1)
     - gamma sign(w_p(k)) w_p^2(k)} / beta

and is assumed to be unknown. An ANN is trained to emulate
the unknown function g[.].

However, as w_p(k+1) is not readily available, the voltage
equation is again manipulated as given below:

V_t(k-1) = N[w_p(k), w_p(k-1), w_p(k-2)]

The structure of the ANN identifier

(Figure: the terminal voltage V_t(k) drives the DC motor, producing
the speed w_p(k). The delayed speeds w_p(k-1) and w_p(k-2) feed the
neural network identifier, whose output estimates V_t(k-1); it is
compared with the delayed actual voltage to form the identification
error e_i(k-1).)

Trajectory control of the DC motor using the trained ANN

The objective of the control system is to drive the motor so
that its speed w_p(k) follows a pre-specified trajectory w_m(k).
The following second-order reference model is chosen:

w_m(k+1) = 0.6 w_m(k) + 0.2 w_m(k-1) + r(k)

For a given desired sequence {w_m(k)}, the corresponding control
sequence {r(k)} can be calculated using the above relation.
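The reference model and the inversion that recovers {r(k)} from a desired trajectory can be sketched as follows (only the coefficients 0.6 and 0.2 come from the source; the function names and the sinusoidal trajectory are illustrative):

```python
import numpy as np

def reference_model(wm, wm_prev, r):
    """Reference model w_m(k+1) = 0.6*w_m(k) + 0.2*w_m(k-1) + r(k)."""
    return 0.6 * wm + 0.2 * wm_prev + r

def control_sequence(wm):
    """Invert the reference model: given a desired trajectory
    {w_m(k)}, solve for r(k) = w_m(k+1) - 0.6*w_m(k) - 0.2*w_m(k-1)."""
    return [wm[k + 1] - 0.6 * wm[k] - 0.2 * wm[k - 1]
            for k in range(1, len(wm) - 1)]

# Desired sinusoidal trajectory; recover r(k), then check that
# running the reference model with it reproduces the trajectory.
wm = np.sin(0.1 * np.arange(50))
r = control_sequence(wm)
w_check = [wm[0], wm[1]]
for k, rk in enumerate(r, start=1):
    w_check.append(reference_model(w_check[k], w_check[k - 1], rk))
```

This is the sequence {r(k)} that the control structure feeds forward while the ANN handles the unknown motor nonlinearity.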

ANN structure showing both identification and control of the DC motor

(Figure: the reference model output w_m(k) is compared with the
motor speed w_p(k) to form the control error e_c(k). The ANN
controller receives r(k) and delayed speeds and generates the
terminal voltage V_t(k) for the DC motor. The ANN identifier
receives the delayed speeds w_p(k-1) and w_p(k-2), estimates
V_t(k-1), and forms the identification error e_i(k-1).)

Tracking performance for a sinusoidal reference track (figure)

NEURAL NETWORK CONTROL

B. Subudhi and S. S. Ge, "Sliding mode control and observer based
slip ratio control of electric and hybrid electric vehicles," IEEE
Trans. on Intelligent Transportation Systems, vol. 13, no. 4,
pp. 1617-1626, 2012.

Performance of ASMC and NN on a slippery road.
(a) Braking torque. (b) Slip. (c) Estimation error. (d) Required voltage.






The speed at the (k+1)th time step can be predicted from

w_p_hat(k+1) = 0.6 w_p(k) + 0.2 w_p(k-1) + r(k)

This result can be fed to the ANN to estimate the control
input V_t(k) at the kth time step using

V_t(k) = N[w_p_hat(k+1), w_p(k), w_p(k-1)]

The matrix m^T corresponds to the reference model
coefficients [0.6 0.2].

THANK YOU

