Lecture Notes in Chemical Engineering

Heinz A. Preisig
Norwegian University of Science and Technology (NTNU)
7491 Trondheim, Norway
Heinz.Preisig@chemeng.ntnu.no
Synopsis
Modelling is at the centre of almost any activity associated with engineering and science. It is thus not surprising that the term modelling is used in a variety of contexts and for many different things. Here we shall refer to modelling as the process of generating a mathematical construct that mimics the behaviour of the piece of world being modelled. The piece of world can be nearly anything: a processing plant, any part thereof, in any detail, a living species, microbes, a green plant, a piece of rock, a tectonic plate, really anything that exists, but also any artificial object such as an algorithm or a program, to mention just two.
The task of generating a mathematical model may be split into three primary domains (Figure 1.1):
2. Model simplification: here, the model fidelity is adjusted. This adjustment is in all cases a simplification; model refinement we consider to be part of the primary domain. Simplifications are of the type that implements additional time-scale and length-scale assumptions. Often they are order-of-magnitude assumptions, which lead to simplifications of the model. Additionally, purely mathematically motivated simplifications may be applied.
[Figure 1.1: The three primary domains of modelling, linking plant and experiment to primary modelling (T1: models M1, M1,i with simplifications S1,i), model simplification (T2: M2,i, S2,i), and the solver producing predictions and results.]
Models come in many different flavours: mechanistic descriptions that are based on the principles that form the foundation of science, and mathematical constructs that capture a certain part of the nature of a natural system. The former is often referred to as a white-box model, indicating that one can see the mechanics of the box, whilst the latter are referred to as black boxes, as there is no real mechanistic thought behind the formulation of the mathematical object representing the modelled system's behaviour. Both boxes hardly ever exist in a pure form; most often one makes use of a combination of the two approaches. Often the reason is simply that one does not know enough about the mechanics of the process, or it is far too complicated for the intended use.
Chapter 2

Mapping the World
by the process is chosen such that it includes all parts relevant for the description of its behaviour, including all those parts of the environment that interact with the process. The arguments on how to subdivide are in parts rather subtle and we will have to take up this subject on several occasions.
Figure 2.1: The process-relevant part of the Universe and its dissection into control volumes (plant volumes A to H embedded in an environment of reservoirs R1, R2, R3).
The most common argument for subdividing the plant and its environment is based on processing units or, more physically motivated, on phases. If we take a little distance from plants and look more generically at processes, then taking the phase as an argument seems a natural choice. Obviously we are then faced with the difficulty of defining the phase boundary. The macroscopic view implies that the world is a continuous system, both in time and space. Thus all characterising variables, namely the state variables, are continuous; but how about a phase boundary? One could view it as a small transition domain between the two adjacent phases in which the states change very rapidly, a thought which people like Gibbs have exercised towards a definition. Most of us, though, would not spend a thought on the issue and would define the boundary as a surface. The reason for picking up this subject here is to demonstrate that already at this very early stage in the modelling process one makes not only length-scale assumptions, which eliminate the granular nature of matter, but the adopted macroscopic view also defines discontinuities in space when defining phases (Figure 2.1).
This division process is thus an abstraction which makes assumptions about the nature of the process: first of all, continuity of all conserved quantities, with matter, energy and momentum being the main ones (Section 2.4.1.1), and discontinuities of the intensive quantities defining the phase boundaries. The term macroscopic thus makes reference to the length scales being chosen. As we focus on macroscopic systems, we will primarily model systems with length scales significantly larger than the molecular dimensions. However, this is a matter of choice, and there are no reasons in principle for restricting things to macroscopic systems other than the application domain being chosen for the exposition here.
The relative time constant of each basic system with respect to the driving inputs may serve as the criterion for the division; that is, the model designer has to make a judgment of the flows and the capacitance effects as well as the relative dynamics in the flows, a subject that needs a more thorough discussion, which is deferred until later (Section 2.1.4).
Based on this view, one is led (tempted) to abstract the plant as shown in the two figures, Figure 2.1 and Figure 2.2.

Figure 2.2: Pulling apart makes the interfaces more visible; the abstraction is then extended to a set of connected systems.

In this picture the plant is divided into a set of volumes {A, B, C, D, E, F, G}, each shown as a circle, and three surfaces {b1, b2, b3} that communicate extensive quantities, as indicated by the arrows. The boundary surfaces, which have no capacity to accumulate extensive quantity, are introduced as connections. The fact that these connections have no capacity is essential for the understanding of the model.
[Diagram: hierarchy of basic system representations: reservoir, steady state, lumped system, and distributed systems in 1D, 2D and 3D.]
The description of the behaviour of the process is then given by the structure of the topology and the assembly of the behaviours of the two basic components.
The systems' descriptions are based on the dynamic balances of the conserved quantities, which define the state of the system. Since a capacity is mostly associated with mass, which occupies a finite volume, the literature frequently uses the term control volume in this context. We take a more generic view in that capacity effects are not limited to mass or volume but may also be associated with other objects such as surfaces and points; in fact, what will evolve below can be generalized to any kind of abstract system and conserved quantities (Section 2.2.1).
One may argue that one should then simply make the dynamic window as large as possible to avoid any problems, which implies an increase in complexity by growing the limits to infinity, ultimately embracing the whole of the universe. Philosophically all parts of the universe are coupled, but this ultimate model is not achievable. When modelling, a person must make choices and place focal points, both in space as well as in time. The purpose for which the model is being generated thus always controls the generation of the model (Apostel, 1960; Aris, 1978), and the modeller, being the person establishing the model, is well advised to formulate the purpose for which the model is generated as explicitly as possible.
Thus one of the main choices to be made is the window in the time scales, which must be picked in advance. For practical reasons it will be different from zero and infinity: on the small time/length scale one will ultimately enter the zone where the granularity of matter and energy comes to bear, which limits the applicability of macroscopic system theory, and at the large end things quite quickly get infeasible as well. Whilst this may be discouraging, having to make a choice does not usually impose any serious constraints, at least not on the large scale. Modelling the movement of tectonic plates or the material exchange in rocks certainly asks for a different time scale than modelling an explosion, for example. There are, though, cases where one touches the limits of the lower scale, that is, when the particulate nature of matter becomes apparent. In most cases, however, a model is used for a range of applications that quite definitely also define a time-scale window.
The dynamics of the process are excited either by external effects, which in general are constrained to a particular time-scale window, or by internal dynamics resulting from an initial imbalance or internal transposition of extensive quantity. Again, these dynamics are usually also constrained to a time-scale window. The maximum dynamic window is thus given by the extremes of the two kinds of windows, that is, of the external and the internal dynamics. A good model is balanced within its own time scales and the time scale within which its environment operates.
[Figure 2.5: Control volume with boundary $\Omega$, flux vector $\delta\hat\varphi$, and position vector $r_1$ measured from the origin of the fixed observer.]
2. the flow across the boundary due to the movement of the boundary with velocity $v_\Omega$:

$$\hat\varphi_\Omega := -\int_\Omega \delta\hat\varphi^T\,\omega\,d\Omega - \int_\Omega \delta\Phi_{S\vee E}\,v_\Omega^T\,\omega\,d\Omega\;. \tag{2.3}$$
The second term maps the velocity onto the external normal vector of the surface to obtain the normal flow per unit surface, which is then weighted with the density of the domain into which the boundary expands. The notation $\delta\Phi_{S\vee E}$ indicates the dependency of the density on the sign of the movement of the boundary, if one allows the density to change discontinuously at the boundary, as the boundary may represent a moving phase boundary. All velocities are measured relative to a fixed observer in space, as indicated in Figure 2.5.
Excluding the case where accumulation occurs in the boundary itself, as may be the case with electrical charges, the accumulation term is:

$$\dot\Phi_S := \frac{d}{dt}\int_0^{V(t)} \delta\Phi_S\,dV\;, \tag{2.4}$$

with the volume being:

$$V(t) := \int_0^t\!\int_\Omega v^T\,\omega\,d\Omega\,dt\;. \tag{2.5}$$
Expanding the accumulation term using the generalized Leibniz rule (Section 7.2.1):

$$\frac{d}{dt}\int_0^{V(t)} \delta\Phi_S\,dV := -\delta\Phi_{S\vee E}\,\dot V(t) + \int_V \frac{\partial\,\delta\Phi_S}{\partial t}\,dV\;, \tag{2.6}$$
$$:= -\int_\Omega v^T\,\omega\,d\Omega\;\delta\Phi_{S\vee E} + \int_V \frac{\partial\,\delta\Phi_S}{\partial t}\,dV\;, \tag{2.7}$$
$$:= -\int_\Omega \delta\Phi_{S\vee E}\,v_\Omega^T\,\omega\,d\Omega + \int_V \frac{\partial\,\delta\Phi_S}{\partial t}\,dV\;. \tag{2.8}$$
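The Leibniz expansion above can be checked numerically in one dimension. The following sketch uses a hypothetical integrand $f(x,t) = e^{-t}x$ and moving boundary $b(t) = 1+t$ (both illustration choices, not from the text) and compares the direct time derivative of the moving-boundary integral with the expanded form:

```python
import numpy as np

# 1-D instance of the generalized Leibniz rule:
#   d/dt int_0^{b(t)} f(x,t) dx = int_0^{b(t)} df/dt dx + f(b(t),t) * db/dt
f = lambda x, t: np.exp(-t) * x   # hypothetical integrand
b = lambda t: 1.0 + t             # hypothetical boundary motion, db/dt = 1

def trapezoid(y, x):
    # simple trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def integral(t, n=20001):
    x = np.linspace(0.0, b(t), n)
    return trapezoid(f(x, t), x)

t, h = 0.5, 1e-5
lhs = (integral(t + h) - integral(t - h)) / (2 * h)      # direct time derivative
x = np.linspace(0.0, b(t), 20001)
rhs = trapezoid(-np.exp(-t) * x, x) + f(b(t), t) * 1.0   # Leibniz expansion
print(abs(lhs - rhs))  # agreement up to discretisation error
```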
with $v_\Omega$ being the velocity vector of the boundary relative to the stationary co-ordinate system. Substituting these terms in the conservation law yields

$$\int_V \frac{\partial\,\delta\Phi_S}{\partial t}\,dV - \int_\Omega \delta\Phi_{S\vee E}\,v_\Omega^T\,\omega\,d\Omega := -\int_\Omega \delta\hat\varphi^T\,\omega\,d\Omega - \int_\Omega \delta\Phi_{S\vee E}\,v_\Omega^T\,\omega\,d\Omega + \int_V N_S^T\,\delta\tilde\varphi_S\,dV\;,$$

which reduces to:

$$\int_V \frac{\partial\,\delta\Phi_S}{\partial t}\,dV := -\int_\Omega \delta\hat\varphi^T\,\omega\,d\Omega + \int_V N_S^T\,\delta\tilde\varphi_S\,dV\;. \tag{2.9}$$
The surface integral over the flow may be transformed into a volume integral by applying Gauss' divergence theorem to each term:

$$\int_V \frac{\partial\,\delta\Phi_S}{\partial t}\,dV := -\int_V \left(\left(\frac{\partial}{\partial r}\right)^T \delta\hat\varphi\right)^T dV + \int_V N_S^T\,\delta\tilde\varphi_S\,dV\;. \tag{2.10}$$
The vector $\frac{\partial}{\partial r} := \left[\frac{\partial}{\partial r_i}\right]_{\forall i}$ is the gradient operator. Since the volume is arbitrary, the integrands must balance pointwise:

$$\frac{\partial\,\delta\Phi_S}{\partial t} := -\left(\left(\frac{\partial}{\partial r}\right)^T \delta\hat\varphi\right)^T + N_S^T\,\delta\tilde\varphi_S\;. \tag{2.11}$$
The same result is obtained using the small-box approach (Lin and Segel, 1988), in which the change of flow from one end of the small box to the other is obtained as the first variation. Often this is also called the shell balance (see Bird et al., 2001; Sears, 1963).
equations. Defining the arbitrary species as $A_i \in \mathcal{A}$, the latter being the set of species, one defines the reaction symbolically with the equation:

$$0 := N^T A\;. \tag{2.12}$$
$$\tilde\varphi := V\,N^T\,\tilde\eta(T, c)\;, \tag{2.13}$$

where $T$ is the temperature and $c$ is the concentration vector, usually the mass normed by the volume.
The function $\tilde\eta_r(c)$ is often of the form:

$$\tilde\eta_r := k_r(T)\,\prod_{\forall j} c_j^{\gamma_{rj}}\;, \tag{2.15}$$

whereby the $\gamma_{rj}$ are usually the respective stoichiometric coefficients, though not necessarily. Collecting this into matrix form, one may write:

$$\tilde\eta := K(T)\,g(c)\;. \tag{2.16}$$
$K$ is the diagonal matrix of the reaction constants, the rest being captured in the vector of functions $g(c)$. The idea behind this model is that the molecules have to meet in order to undergo reaction. The probability of meeting is then related to the density, being the number of molecules in a volume. Choosing a statistical argumentation, though, yields a dependency on the mole fraction and not the concentration (Denbigh and Turner, 1971), but in practice it is observed that the molar concentration provides a better match. Whilst this is the most commonly used description for simple reaction kinetics, there exist many more for more complicated cases.
being presently converted and ejecting the corresponding amount of products. Both views are presented in the literature independently. We usually take the first viewpoint, though a change of the representation reflecting the second approach is readily achieved by splitting the component mass balances into two separate balances, one that reflects the hydraulics and a pseudo-steady-state one for the reaction system.
The literature often gives the reaction dynamics in the form of a rate of disappearance or appearance of a species per unit volume:

$$\tilde r_{ri} := \frac{\nu_{ri}}{|\nu_{ri}|}\,k'_{ri}\,g(c)\;. \tag{2.17}$$

It is then up to the user to scale it properly with the stoichiometric coefficients:

$$\frac{\tilde\varphi}{V} := \frac{\nu_{ri}}{|\nu_{ri}|}\,\tilde r_{ri}\;. \tag{2.18}$$
The expression used here assumes appropriate scaling of the reaction constants, thus:

$$k_r := \frac{k'_{ri}}{|\nu_{ri}|}\;. \tag{2.19}$$

This removes the dependency of the reaction constant on the species being taken as the reference in the definition of the reaction rate.
The second comment relates to the term constant being used in this context: the reaction constant is a strong function of the temperature. The standard model is known as the Arrhenius law, that is:

$$k_r(T) := k_{ro}\,e^{-E_{Ar}/(RT)}\;. \tag{2.20}$$
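Equations (2.15) and (2.20) combine into a directly computable rate expression. The following sketch takes only the functional form from the text; the values of $k_{ro}$, $E_{Ar}$, the concentrations and the exponents are hypothetical illustration numbers:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def k_r(T, k_ro=1.0e7, E_Ar=5.0e4):
    """Arrhenius law (2.20); k_ro and E_Ar are assumed values."""
    return k_ro * np.exp(-E_Ar / (R * T))

def eta_r(c, gamma, T):
    """Mass-action rate (2.15): k_r(T) * prod_j c_j**gamma_rj."""
    return k_r(T) * np.prod(np.power(c, gamma))

c = np.array([2.0, 1.0])       # assumed concentrations of two species
gamma = np.array([1.0, 1.0])   # exponents, here taken as stoichiometric coefficients
ratio = eta_r(c, gamma, 350.0) / eta_r(c, gamma, 300.0)
print(ratio)  # the rate grows strongly with temperature
```

The strong temperature dependency is what makes the term "constant" misleading: a 50 K change shifts the rate by an order of magnitude for typical activation energies.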
temperature, volume and pressure, surface area and surface tension, molar mass and chemical potential, but also electrical field and dipole moment, etc. He also defines the natural variables of the energy $E$, which are extensive quantities including entropy $S$, volume $V$, species mass $n$, dipole moment $P$, magnetic moment $M$, surface $S$, and length $s$, to mention the main ones (Alberty, 1994).
Through these developments the transfer description was tied to one particular fundamental extensive quantity that provides a measure for the transport potential, which in natural systems is the energy.
The continuity conditions are a reflection of the basic assumption in macroscopic field theory, namely the continuity of the fundamental extensive quantity in the spatial domain. Let the internal energy $U$ and the component mass $n$ be the fundamental extensive quantities; then macroscopic field theory assumes continuity of the fundamental extensive quantities in the spatial domain, that is, the length scale is so large that the quantisation of mass and energy is not observable and the world appears as a continuous entity in the space it occupies.
Writing the internal energy as a function of its natural extensive variables $S, V, n$, which in turn are a function of the spatial co-ordinate $r$: $U(S(r), V(r), n(r))$, the spatial derivative of the internal energy is:

$$\frac{\partial U}{\partial r} := \frac{\partial U}{\partial S}\frac{\partial S}{\partial r} + \frac{\partial U}{\partial V}\frac{\partial V}{\partial r} + \frac{\partial U}{\partial n^T}\frac{\partial n}{\partial r}\;. \tag{2.21}$$
With the extensive quantities being continuous in space, all the other partial derivatives in the expression, namely $\frac{\partial U}{\partial S}$, $\frac{\partial U}{\partial V}$, $\frac{\partial U}{\partial n^T}$, are also continuous. They are also of intensive nature (Euler degree zero) and are point properties of the system. Their gradient is the driving force for the transfer of the conjugated extensive quantity. These quantities are thus special intensive quantities because they drive the extensive quantity transfer: the first of the above-defined partial derivatives is the thermodynamic temperature, the second is the pressure and the third the chemical potential⁴:

$$T := \left.\frac{\partial U}{\partial S}\right|_{V,n} \tag{2.22}$$
$$p := -\left.\frac{\partial U}{\partial V}\right|_{S,n} \tag{2.23}$$
$$\mu := \left.\frac{\partial U}{\partial n^T}\right|_{V,S}\;. \tag{2.24}$$
In the case where different fields are present and acting on, for example, mass, such as an electrical field and a concentration field, all forces must be considered
⁴ Note the difficulty in nomenclature of these quantities: whilst energy is a potential, the chemical potential is really not a potential, but chemical potential and component mass form a conjugate pair...
in the above definition⁵. Thus care must be taken when defining the driving forces as the number of natural variables increases. In these cases Legendre transformations (Modell and Reid, 1974) are used to define new energy functions, such as a modified Gibbs free energy, which, when differentiated, define the modified chemical potential. The article of Alberty (Alberty, 1994) and the book of Guggenheim (Guggenheim, 1967) provide a detailed discussion of this issue. With the potentials and the conjugated intensive quantities being thus continuous in the spatial domain, they are also continuous at the system boundaries, a property which is of particular interest at phase boundaries. As one of the consequences the flux is also continuous, as we shall see below⁶ (Section 2.4.2.1).
Defining $\pi$ as the driving force, thus the conjugate to the potential, the flux tensors for the most common transfer laws are⁷,⁸:

For an anisotropic medium with constant conductivity tensor $\lambda := \text{constant}$:

$$\delta\hat\varphi := -\lambda\,\frac{\partial}{\partial r}\,\pi\;, \tag{2.25}$$

For an isotropic medium with constant scalar conductivity $\lambda := \text{constant}$:

$$\delta\hat\varphi := -\lambda\,\frac{\partial}{\partial r}\,\pi\;. \tag{2.26}$$

For an isotropic medium with variable conductivity $\lambda(r)$⁹:

$$\delta\hat\varphi := -\left(\left(\frac{\partial}{\partial r}\right)^T \pi\;\lambda\right)^T\;. \tag{2.27}$$
These transfer laws are not basic laws but represent a simplification of the behaviour of the transfer system. They should thus be seen as black-box models. All the transfer laws, though, have two basic common properties. Let $\nabla\pi$ be the gradient of the driving force and let the flux as a function of the gradient be denoted by $\delta\hat\varphi(\nabla\pi)$; then

$$\delta\hat\varphi(0) := 0\;. \tag{2.28}$$

Thus there is no extensive quantity being transferred if there is no driving force. Additionally,

$$\delta\hat\varphi(\mathrm{sign}(\nabla\pi)) := \mathrm{sign}(\nabla\pi)\;\delta\hat\varphi(|\nabla\pi|)\;. \tag{2.29}$$

The gradient of the driving force determines the direction of the flow. These basic transfer laws describe the transfer within a system and are thus part of the description of a distributed system.
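The two properties (2.28) and (2.29) can be illustrated for the simple isotropic law (2.26) in one dimension; the conductivity value below is an arbitrary assumption:

```python
import numpy as np

lam = 2.5  # assumed scalar conductivity

def flux(grad_pi):
    """Isotropic transfer law (2.26) in one dimension."""
    return -lam * grad_pi

assert flux(0.0) == 0.0                 # (2.28): no driving force, no flux
g = 0.3
assert np.sign(flux(g)) == -np.sign(g)  # the flux runs against the gradient
assert flux(-g) == -flux(g)             # (2.29): the gradient's sign sets the direction
print("transfer-law properties hold")
```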
⁵ For example, mass may also be transferred due to magnetic and electrical fields if the molecules or particles are charged ((Deen, 1998), (Groot and Mazur, 1983), (Wesselingh and Krishna, 2000)).
⁶ Another continuity proof is given in (Tisza, 1966), p. 133.
⁷ Note that, in order to avoid some difficulties in the interpretation, the flux is written as a tensor (matrix) even if it reduces to a vector.
⁸ The quantities preceded by a $\delta$ are point properties, thus intensive quantities that measure density in one or the other form. The intensity is the respective extensive quantity normed by the volume.
⁹ Notation: the grad operator is here not used in transposed form, thus the representation of the remainder of the expression is adjusted accordingly.
Webster defines phase in the context of physics as a homogeneous, physically distinct, and mechanically separable portion of matter present in a non-homogeneous physicochemical system. This definition builds on the concept of mechanical separation of phases, which implies a difference in their properties that can be exploited for their separation.

The concept of phase may be expanded by allowing for pseudo phases, that is, spatially averaged phases.
Besides the continuity condition discussed above, the flux condition is essential for the solution. Balancing about a phase boundary is of particular interest. The integral balance for a control volume (system $S := a + b$) placed about the phase boundary (Figure 2.6) reads

[Figure 2.6: Control volume $S := a+b$ placed about the phase boundary $\Omega_{a+b}$ between phases $\alpha$ and $\beta$: system a on the $\alpha$ side, system b on the $\beta$ side, corner points A, B, C, D, I, J, boundary distance $s$, and fluxes $\hat\varphi_{-\varepsilon}$, $\hat\varphi_{+\varepsilon}$ on either side of the boundary.]
The overall control volume splits into two parts, namely a part (system a) on one side of the phase boundary and one part (system b) on the other side of the boundary. The surface of the overall control volume is split into six sections, namely the two running along the phase boundary, $\Omega_{AB}$, $\Omega_{CD}$, and four small pieces for the two edges crossing the phase boundary: $\Omega_{BJ}$, $\Omega_{JC}$, $\Omega_{DI}$, $\Omega_{IA}$:

$$\dot\Phi_{a+b} := -\int_{\Omega_{IA}} \delta\hat\varphi^T\omega\,d\Omega - \int_{\Omega_{AB}} \delta\hat\varphi^T\omega\,d\Omega - \int_{\Omega_{BJ}} \delta\hat\varphi^T\omega\,d\Omega + \int_{\Omega_{JC}} \delta\hat\varphi^T\omega\,d\Omega + \int_{\Omega_{CD}} \delta\hat\varphi^T\omega\,d\Omega + \int_{\Omega_{DI}} \delta\hat\varphi^T\omega\,d\Omega + \int_V N_{a+b}^T\,\delta\tilde\varphi_{a+b}\,dV\;. \tag{2.31}$$
Letting the distance $s$ between the boundaries $\Omega_{AB}$ and $\Omega_{CD}$ approach zero, the integrals over the edges disappear and, clearly, the accumulation term also approaches zero if there is no accumulation in the boundary itself. The transposition term reduces to the transposition taking place in the boundary itself, such as effects associated with the phase change:

$$0 := -\int_{\Omega_{AB}} \delta\hat\varphi^T\omega\,d\Omega + \int_{\Omega_{CD}} \delta\hat\varphi^T\omega\,d\Omega + \int_V N_{a+b}^T\,\delta\tilde\varphi_{a+b}\,\delta(s-0)\,dV\;.$$
The continuity conditions give rise to the jump conditions at the phase boundary. Let $\pi_{-\varepsilon}$ and $\pi_{+\varepsilon}$ be the conjugates to the potential left and right of the boundary; then the continuity condition states that the two match across the boundary, $\pi_{-\varepsilon} := \pi_{+\varepsilon}$.
which assumes ideal solutions for the component A in both phases, $x_{s,A}$ being the mole fraction. The Nernst distribution constant gives the ratio of the two concentrations in the phases:

$$\frac{x_{a,A}}{x_{b,A}} := \exp\!\left(\frac{\mu^o_{b,A} - \mu^o_{a,A}}{RT}\right)\;, \tag{2.39}$$

which uses the fact that the temperature is equal at the boundary (continuity condition of the potentials at the surface).
Let us define a fundamental state space consisting of the conserved quantities required to capture the behaviour of the primitive system, thus $x := \Phi$, together with the differential conservation (Equation (2.11)) and its integral.¹⁰ The integral also defines a new set, namely the initial conditions $x(0) := \Phi(0)$, which we assume to be given. The balance equations define two new sets of variables, namely the flows $\hat x := \hat\varphi$ and the transposition $\tilde x := \tilde\varphi$ of extensive quantities.
The next step in building the model is to define the transport of extensive quantity and thereafter the transposition. The transport always links two systems together. Thus we introduce the directed flow from system a to system b, with $p_{a|b}$ denoting the properties of the physical transport system. The flow equation introduces a set of new variables, namely the driving forces and the properties associated with the interface with respect to the transferred quantity. The driving forces are a function of the state of the respective system and thus state-dependent information. To signify this fact we introduce the term secondary state, symbolised by the character $y$. The driving forces take a special role in the formulation of the model (Page 28).
In contrast to the transport, the transposition takes place inside the system and is thus only a function of the state of the system in which it takes place. The transposition, too, is driven by a set of variables that are a function of the state of the system. A set of property variables $p_s$ enables the characterisation of the transposition in its given environment.
Both the transfer and the transposition introduce a state-dependent set of variables, which we termed secondary state variables. These variables must be a function of the state; thus the equations determining them are of the generic form:

$$y_s := y_s(y_s, x_s, p_s)\;. \tag{2.45}$$
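A minimal sketch of resolving such an implicit secondary-state relation numerically: a hypothetical enthalpy-like relation $H = C_p(T)\,T$ with temperature-dependent $C_p(T) = a + bT$, inverted for the temperature with a Newton iteration to obtain the explicit mapping of the form (2.46). All numbers are assumed for illustration:

```python
# Hypothetical implicit relation of type (2.45): H = Cp(T)*T with Cp(T) = a + b*T,
# solved for T so that the explicit mapping (2.46) is available numerically.
a, b = 30.0, 0.02   # assumed heat-capacity parameters
H = 12000.0         # given enthalpy-like quantity derived from the state

def residual(T):
    return (a + b * T) * T - H

T = 300.0                          # initial guess
for _ in range(50):                # Newton iteration on the scalar residual
    T -= residual(T) / (a + 2 * b * T)
print(T)                           # temperature consistent with the given H
```

Here the quadratic could of course be solved analytically; the iteration stands in for the general case where the temperature dependency is not invertible in closed form.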
¹⁰ Note that the notation for the functions is abbreviated in that, whenever it is meaningful, the function name is identical with the variable name, which improves readability.
This set of equations is in general implicit, though most of the equations are explicit in the secondary state variables; it is mostly the temperature that appears implicitly in the equations. What must be the case, though, is that this set of equations is a mapping from the fundamental state space spanned by the $x$-variables. It must thus be possible to solve the above set of equations for $y_s$:

$$y_s := y_s(x_s, p_s)\;. \tag{2.46}$$

If this is not possible analytically, which it nearly only is when the dependency on the temperature is linear, the equations must be solvable numerically. The reader should note that the solutions may not always be unique, and selecting the correct solution may not be a trivial task; equations of state are objects that notoriously pose this problem. Whilst this is not a structural issue in principle, it certainly is a practical one and should be kept in mind, even though we shall ignore it for the time being.
The properties introduced as additional quantities must also be a function of the state, very similarly to the secondary state variables. In fact, one could merge them with the secondary state, were there not very often a special significance associated with them. Thus:

$$\Theta_i := \text{given}\;, \tag{2.48}$$
The last two equations are a set of ordinary differential equations and its integral over time, respectively: a standard initial value problem.
The two keys to the formulation of the model are thus the choice of fundamental variables with their respective conservation principles, and the mapping of the secondary state variables from the fundamental state defined in the first step. The chosen formulation is independent of the number of systems involved in the description, as the substitution always involves only two adjacent systems and can thus be done recursively over the number of systems. As we shall see later, this is also the case when making order-of-magnitude assumptions. This locality principle does not break down with making simplifying assumptions, if handled properly.
with $c_{E|S}$ being the overall heat transfer coefficient times the transfer area, and the system volume work term:

$$\hat w_{S|E} := -p\,\dot V\;, \tag{2.65}$$

which is the case where the state variable transformation is explicit in the primary state variable, here the enthalpy. The transformation thus follows the second scheme; indeed, the required invertibility condition is satisfied. Following the recipe, the variable transformation is differentiated; note that $C_p(T_S)$ is the total heat capacity here. Substitution yields:

$$\dot T_S := \frac{-c_{E|S}}{C_p(T_S)}\,(T_S - T_E)\;. \tag{2.70}$$
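Equation (2.70) can be integrated directly. The following sketch uses assumed parameter values and a simple explicit Euler scheme to show the exponential relaxation of the system temperature towards the environment:

```python
# Explicit Euler integration of (2.70): the lumped system relaxes exponentially
# towards the environment temperature. All parameter values are assumed.
c_ES = 50.0    # overall heat-transfer coefficient times area, W/K
Cp = 4000.0    # total heat capacity, J/K
T_E = 293.0    # environment temperature, K
T_S = 350.0    # initial system temperature, K
dt = 0.1       # time step, s

for _ in range(10000):  # 1000 s, about 12 time constants Cp/c_ES = 80 s
    T_S += dt * (-c_ES / Cp) * (T_S - T_E)
print(T_S)  # close to T_E
```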
$$\dot x := f(y)\;, \tag{2.71}$$
$$y := g(x)\;, \tag{2.72}$$

where $x$ is the vector of conserved quantities and $y$ the vector of all algebraic quantities defined in the model.
$$z := T\,x\;. \tag{2.73}$$
$$\dot z := T\,\dot x := T\,f(y)\;, \tag{2.74}$$
$$y := g(T^{-1} z)\;. \tag{2.75}$$

Since there are infinitely many transformation matrices that satisfy the invertibility condition, an equal number of equivalent state-space representations is possible just through linear transformation.
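This equivalence can be demonstrated with a small linear sketch: simulating $x$ directly and simulating $z = Tx$ with the transformed dynamics (2.74), then mapping back, yields the same trajectory. The matrices are arbitrary illustration choices ($T$ invertible):

```python
import numpy as np

# Equivalent representations (2.73)-(2.75): x' = A x versus z' = (T A T^-1) z.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
T = np.array([[2.0, 1.0], [0.0, 1.0]])
Tinv = np.linalg.inv(T)

x = np.array([1.0, -1.0])
z = T @ x
dt = 1e-3
for _ in range(1000):
    x = x + dt * (A @ x)             # original coordinates
    z = z + dt * (T @ A @ Tinv @ z)  # transformed coordinates
err = np.max(np.abs(x - Tinv @ z))   # agreement up to round-off
print(err)
```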
[Diagram: a PDE description reduced to a steady-state network.]
When fractioning the overall volume into smaller volumes, one generates surface elements that separate adjacent systems. In most cases one is not interested in the flux, but rather in the total flow across such a surface element, which is one reason for lumping the boundary. Secondly, one may have more than one type of interaction between two adjacent systems: for example, there may be a heat flow through a non-porous physical wall and a flow through an opening in the same physical wall, which allows the two systems to interact via heat transfer through the wall and mass transfer through the hole. The lumping thus primarily splits the boundary into local boundary elements that may be classified with regard to the type of extensive quantity being transferred, a concept that is directly coupled to the typed thermodynamic walls (open, closed, adiabatic, etc.).
[Diagram: system S with its boundary split into elements $\Omega_1$ to $\Omega_6$, with position vectors $r_1$, $r_2$, $r_3$ measured from the origin of the fixed observer.]
The cumulative flow through a piece of boundary is simply the integral over the respective boundary element $\Omega_i$:

$$\hat\varphi_{\Omega_i} := \int_{\Omega_i} \delta\hat\varphi^T\,\omega\,d\Omega\;. \tag{2.76}$$
This integral measures the flow in the direction relative to the normal vector of the boundary, where by convention the normal vector points away from the boundary. In the abstraction process, the systems are pictorially pulled apart and represented as circles, or other graphical objects depending on the type of system (Figure 2.2).
The flow through the common piece of boundary between two systems is mapped into a connection, which introduces a unique coordinate system against which the actual flow between the connected systems is measured. This information is captured in the notation $\langle a\rangle\,|\Omega_i|\,\langle b\rangle$, where $\langle a\rangle$ is the place holder for the system in which the origin of the reference co-ordinate system is placed and $\langle b\rangle$ is the system at the other end, whilst the common boundary piece $\Omega_i$ is placed between two vertical bars, guarded on either side by the two systems (Figure 2.13). The reference co-ordinate, introduced for each connection, is denoted by $\alpha \in \{-1, 0, +1\}$, where $+1$ indicates the head of a connection arrow, $-1$ a respective tail and $0$ no flow. Obviously, a flow must always be defined between two systems, that is, flow may not just disappear into or appear from the void. The sum of the control volumes is thus always closed, representing the process-relevant universe (Section 2.1.1).
The integral balance equation for a system with stationary boundaries, that is $v_\Omega := 0$, reads more compactly when lumping the flows for the boundary elements:

$$\int_V \frac{\partial\,\delta\Phi_S}{\partial t}\,dV := \sum_{\forall c} \alpha_c\,\hat\varphi_c + N_S^T\,\tilde\varphi_S\;,$$

where the connection matrix is a block diagonal matrix with identity blocks weighted with the respective reference coordinate, and

$$\hat\varphi_s := \left[\hat\varphi_c\right]_{\forall c}\;, \tag{2.79}$$

a stack of all flow vectors. The row and the column sums of the connection matrix are zero.
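The bookkeeping can be sketched with a small hypothetical network: each connection column of an incidence-style matrix carries a $+1$ at the head and a $-1$ at the tail of the arrow, so no flow appears from or vanishes into the void (the network and flow values are illustration assumptions):

```python
import numpy as np

# Incidence-style connection matrix F (systems x connections), +1 at the head,
# -1 at the tail of each connection arrow. Hypothetical network: a->b, b->c, c->a.
F = np.array([
    [-1,  0,  1],   # system a
    [ 1, -1,  0],   # system b
    [ 0,  1, -1],   # system c
])
phi = np.array([2.0, 0.5, 1.0])  # assumed flows on the three connections

accumulation = F @ phi           # net in-flow per system
assert (F.sum(axis=0) == 0).all()       # each flow leaves one system, enters another
assert abs(accumulation.sum()) < 1e-12  # total extensive quantity is conserved
print(accumulation)
```

For this cyclic example the row sums vanish as well, matching the statement above.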
The abstraction of the process-relevant universe into systems separated by idealised walls can be further refined and generalised by recognising two limiting cases: one in which a system is flown through with minimal internal re-circulation, and one in which the in-flow and the out-flow are minimal compared to the internal re-circulation. The limit is in both cases a reduction of minimal flow to no flow (Figure 2.9). In the first case one assumes zero internal re-circulation, whilst in the second case one sets the flow across the boundary to zero. The argument is not only an order-of-magnitude assumption in the flow, but also in the time scale: it is assumed that the dynamic window for the internal process is clearly in the short time scale, compared to the dynamics of the flows across the system's boundary.
2.6.1.2.1 Small Internal Re-Circulation, No Reactions and Slow Changes at the Boundary
In this case, the stationary and constant control volume is placed in a flow field. The modelling is done in a range of the time scale where the changes at the boundary are very slow; thus one may assume a stationary flow field which has no internal re-circulation, that is, the curl of the flow field is zero ((Deen, 1998), (Bird et al., 2001)). This in turn implies that the accumulation terms in both basic balances, the integral balance (Equation (2.9)) and the differential balance (Equation (2.11)), approach zero, which is often referred to as pseudo-steady state.
[Diagram: step responses of the two limiting cases: with no internal mixing, the output $y_p$ follows the input $u$ after a dead time; with perfect internal mixing, the output $y_i$ responds with a slow or a fast time constant.]
So the inflows balance the outflows, which matches the expectations. The differential balance (Equation (2.11)) simplifies to:

$$0 := -\left(\left(\frac{\partial}{\partial r}\right)^T \delta\hat\varphi\right)^T\;. \tag{2.83}$$
This equation describes an idealised fast transfer system, in which the internal transport is fast compared to the changes at the boundary. The transport is a function of the state of the system and the state of the connected system. With the accumulation term disappearing, the resulting set of equations becomes algebraic, from which the stationary distribution of the state can be computed as a function of the conditions at the boundaries. Two examples can be found in the appendix: the first analyses a very common assumption, namely the heat transfer through a wall (Section 9.1.5); the second is also a heat transfer process, but it focuses on demonstrating the effects of having more than just two active boundary pieces (Section 9.1.6).
Substituting the simple isotropic gradient transport law (Equation (2.26)), one gets:

$$0 := -\left(\frac{\partial}{\partial r}\right)^T \lambda\,\frac{\partial}{\partial r}\,\pi\;. \tag{2.84}$$
So this is a second-order differential equation in $\pi$. For the transfer to be computable, the solution to the second-order differential equation must exist ((Lin and Segel, 1988), p. 121). The existence of a solution was discussed early in the literature (Courant et al., 1928). Lin and Segel, though, expressed the fact ((Lin and Segel, 1988), p. 418) that most scientists on most occasions do not concern themselves with the thorny philosophical questions that emerge from a searching examination of what lies at the foundation of their endeavours. The solution forms a hyper-surface, with the boundary conditions defining the position of this surface. Integrating the above equation once states that the flux tensor $\delta\hat\varphi$ is constant:

$$\delta\hat\varphi := -\lambda\,\frac{\partial}{\partial r}\,\pi := \text{const}\;. \tag{2.85}$$
Two important lessons are to be drawn from this, namely the facts that

- the state is eliminated, and
- there is no time effect associated with the transfer.

For simple two-active-boundary systems such as discussed in Section 9.1.5, the time-scale assumption leads to a simplification of the transfer system to a simple resistance, which is what the arrows in the first picture of the decomposition represent (Figure 2.7).
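The collapse of the wall to a single resistance can be sketched numerically. The snippet below is a minimal illustration, with assumed values for the conductivity, thickness and boundary potentials (none of these numbers come from the notes): integrating the stationary balance once makes the flux constant, so the whole transfer reduces to the resistance L/λ and a linear profile.

```python
# Minimal sketch of the wall-as-resistance simplification (Section 9.1.5 style).
# lambda_: conductivity, L: wall thickness -- illustrative assumptions.

def wall_flux(pi_left, pi_right, lambda_, L):
    """Constant flux through the wall: driving difference over the resistance L/lambda_."""
    resistance = L / lambda_
    return (pi_left - pi_right) / resistance

def wall_profile(pi_left, pi_right, L, r):
    """The stationary potential profile is linear in r (0 <= r <= L)."""
    return pi_left + (pi_right - pi_left) * r / L

flux = wall_flux(350.0, 300.0, lambda_=0.8, L=0.2)
mid = wall_profile(350.0, 300.0, L=0.2, r=0.1)
```

Note that the state inside the wall never appears in the flux expression, which is exactly the first of the two lessons above.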
2.6.1.2.2 Maximal Internal Flow, Slow Reactions and Small, Slow Flows Across the Boundaries
In this case one assumes strictly no flow across the boundary and maximal internal flow. Placing the dynamic window into the small time scale, where the reactions are slow and thus the turnover very small compared to the internal flows, the differential balance (Equation (2.11)) reduces to

$$\frac{\partial\, \delta\Phi_S}{\partial t} := -\left(\frac{\partial}{\partial r}\right)^{T} \delta\hat{\varphi} \,. \qquad (2.86)$$
Further assuming that the inflow and the outflow of the control volume are small compared to the internal flows, the equilibrium is reached quickly. Thus, on the larger time scale, the internal fast dynamics are in equilibrium and no change with time is observed:

$$0 := -\left(\frac{\partial}{\partial r}\right)^{T} \delta\hat{\varphi} \,. \qquad (2.87)$$

Since the inflow is negligible on this time scale, the system is closed and the solution is a constant. So the intensive quantity δΦ_S is constant everywhere in the region.
With the conditions in the contents being uniform, we shift the time scale to a longer one. Now Equation (2.9) simplifies significantly: the densities are constant everywhere in the volume, thus the volume integrals involving the densities change to the volume times the densities, which is simply the corresponding extensive quantity:

$$\frac{d\Phi_S}{dt} = -\int_{\Omega} \delta\hat{\varphi}^{T}\,\omega \; d\Omega + N_S^{T}\,\tilde{\varphi}_S \,. \qquad (2.88)$$
Lumping the boundary (Section 2.6.1.1) and assigning the global co-ordinate, the equation for the reactive, ideally-mixed domain emerges:

$$\frac{d\Phi_S}{dt} = F_S\,\hat{\varphi} + N_S^{T}\,\tilde{\varphi}_S \,. \qquad (2.89)$$

This equation describes an idealised capacity, namely a lumped system in which the generalised densities, that is, the extensive quantities normed by the volume, are constant within the control volume at a time scale that is large relative to the internal mixing.
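The lumped balance of Equation (2.89) can be sketched for one ideally mixed system with an inflow, an outflow and a single reaction A → B. All numbers below (rate constant, flows, volume) are illustrative assumptions, not values from the notes; the point is only the shape dΦ/dt = F ϕ̂ + Nᵀ ϕ̃.

```python
import numpy as np

# One lumped system, species [A, B], streams [in, out], reaction A -> B.
f = np.array([+1.0, -1.0])        # stream direction coefficients: inflow +, outflow -
N = np.array([[-1.0, +1.0]])      # stoichiometric matrix of A -> B (one reaction row)
k = 0.5                           # assumed reaction constant, 1/s
V = 1.0                           # assumed (constant) volume

def rhs(Phi, n_in):
    c = Phi / V                               # generalised densities
    n_out = 0.2 * c                           # assumed outflow proportional to contents
    streams = np.column_stack([n_in, n_out])  # species x streams
    rate = np.array([k * c[0] * V])           # reaction extent, mol/s
    return streams @ f + N.T @ rate           # dPhi/dt = F phi_hat + N^T phi_tilde

# explicit Euler, just to show the balance settles on a steady state
Phi = np.array([1.0, 0.0])
n_in = np.array([0.1, 0.0])
for _ in range(20000):
    Phi = Phi + 1e-3 * rhs(Phi, n_in)

total_in = n_in.sum()
total_out = 0.2 * (Phi / V).sum()
```

Since the reaction only transposes A into B, the total molar inflow and outflow balance at steady state, as the lumped balance requires.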
The decomposition of the spatial domain that the plant and its relevant environment (Section 2.1) occupy into control volumes linked by connections yields a network of capacities and connections. This can be depicted in the form of a graph. The nodes of the graph are thus the primitive systems, and the connections, representing the boundary conditions of continuous flux, are the arcs. Both the capacities and the arcs may be typed, meaning they may receive a colour indicating the type of fundamental extensive quantity necessary for the description (energy only; component mass and energy; ..., for example), with the connections receiving the colour of the transferred type of fundamental extensive quantity and its form (heat, work, component mass, etc.). Such a graph is shown in Figure 2.11.
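A typed ("coloured") capacity/connection graph can be held as plain data. The sketch below uses node names from the reactor example (R: reaction phase, E: extract phase, h: heater, s: sensor); which colours each node and arc carries here is an illustrative assumption, not the notes' exact assignment.

```python
# Capacities (nodes) typed by the fundamental extensive quantities they hold.
capacities = {
    "R": {"component mass", "energy"},
    "E": {"component mass", "energy"},
    "h": {"energy"},
    "s": {"energy"},
}

# Arcs: (origin, target, colour); the tuple order fixes the reference direction.
arcs = [
    ("R", "E", "component mass"),
    ("h", "R", "heat"),
    ("h", "E", "heat"),
    ("E", "s", "heat"),
]

def subgraph(colour):
    """Extract the sub-graph of one colour, e.g. the mass-transfer network."""
    return [(a, b) for (a, b, c) in arcs if c == colour]

mass_net = subgraph("component mass")
```

Filtering by colour is exactly the operation used later to obtain the per-colour adjacency matrices of the network equations.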
The graphs are directed in that the arrow introduces a reference co-ordinate system for each flow (Figure 2.13). The graph thus shows the interaction between the capacities that were carved from the overall system, together representing the system's entity. The graph can be used to depict the main primary assumptions associated with mapping the world into equations. These assumptions are fundamentally essential for the model, a fact that cannot be emphasised enough. All information associated with the dynamics is captured in the graph. For example, in Figure 2.3 we can introduce circles for lumped systems and ellipses for distributed systems, indicating the distribution co-ordinates in the ellipse.
The nature of the connection depends on whether the surface has been lumped or not. In the case of a lumped system it is by definition lumped, but in the case of distributed systems it depends, and thus two different connections must be defined, namely one between a lumped and a distributed surface and one between two distributed surfaces. Figure 2.3 depicts an alternative.
What can one read from these graphs? Firstly, one can see how the plant was broken down into control volumes and what type of assumptions have been made in terms of the internal dynamics of the various control volumes (lumped, distributed). One can also see which capacity element is talking with which other one and in what way (lumped surface with lumped surface, lumped surface with distributed surface, and distributed surface with distributed surface). One may ask why one would have to distinguish between lumped and distributed systems if everything really is distributed. The distinction thus reflects the nature of the containment as assumed by the model designer.
2.6.2.1.2 Colouring in
• Reactor: two phases only, both captured in an individual lump, thus assuming uniform intensive properties in each of them.
• Extraction phase: no A, thus species A is not diffusing across the boundary from the reaction phase.
• Reaction phase: product C is essentially insoluble and diffuses into the extraction phase, where it concentrates up.
• Heater: assumed as a simple capacity, communicates heat with both phases.
• Sensor: only in contact with the extract phase, as it is the main phase in the reactor.
• Feeds: the two reactants come from two separate reservoirs. The feed tanks are thus essentially not modelled.
• Energy source: an infinite source of energy, thus not modelled.
• Product tank: a separator, not modelled. It would have to be modelled as a two-phase system.
• The energy supplied by the energy reservoir is converted into heat in the heater without any loss.
• The two phases may exchange heat (red dotted) and mass (black full).
• The overflow is a two-phase flow.
Figure 2.10: The total plant: a 2-phase reactor with two feeds, an overflow, and an electrical heating/cooling device equipped with a temperature sensor. [Figure labels: inflow A, inflow B, temperature sensor, droplet phase with reaction A + B → C and negligible C, extract phase with no A, two-phase overflow.]
[Figure 2.11: The coloured graph of the plant: nodes a, b, e, h, R, E, s with flows n̂_{a|R}, n̂_{b|E}, ŵ_{e|h}, q̂_{h|R}, q̂_{E|R}, n̂_{E|R}, q̂_{E|s}, n̂_{R|p}, n̂_{E|p} and temperatures T_a, T_b, T_R, T_E.]
The coloured dots indicate the presence of a species for each colour. Here red was used for species A, green for species B and green-brown for species C. The graph can be further structured into sub-graphs showing only the mass transfer network, for example. Those can be further coloured to show the domain in which the individual species exist, given a set of assumptions about the directionality of flow (uni-directional or bi-directional) and the ability to transfer a species through a given interface. The latter abstracts the semi-permeable walls of thermodynamics. In our case, the mass transfer between the reaction and extract phase does not transfer species A, whilst species C is assumed to be insoluble in the reaction phase. These are obviously black/white assumptions: the species is transferred or not, and the species is soluble or not. These assumptions lead to a simplification of the model, as only the component mass balances for the present species must be established in each primitive system. Similarly, if a species is not transferred, no transfer law must be generated (Figure 2.12).
[Figure 2.12: Species sub-graphs for species A, B and C over the nodes a, b, R, E, P.]
The graph now contains all the information needed to write the model, with the exception of the transfer models and the reaction model.

This graph, combined with the description of its components, represents a network model, which can be written in a very condensed form. The construction of the condensed form is demonstrated on a network of minimal dimension that contains all components, namely a network of two systems connected with each other. Figure 2.13 shows such a network, assuming the two involved systems to be lumped and connected by two connections, each communicating an extensive quantity ϕ̂ through the respective common part of the interface. Such a system is described in Section 2.6.1.2.2, with the result in Equation (2.89).
Figure 2.13: Two systems communicating through two pieces of the common boundary. [Diagram: systems A and B, boundary pieces Ω₁ and Ω₂, flows ϕ̂_{B|Ω₁|A} and ϕ̂_{A|Ω₂|B}.]
Figure 2.13 illustrates the notation on an example of two connections defined between the two systems A and B. The arrows indicate the defined directions for each connection. The flow is then measured relative to these directions. In the description of system A, the flow ϕ̂_{B|Ω₁|A} shows positive in system A and negative in system B, and inversely for the second defined flow ϕ̂_{A|Ω₂|B} through the second boundary element. The conservation equations for the two stationary, lumped systems are:

$$\dot{\Phi} = F\,\hat{\varphi} + N^{T}\,\tilde{\varphi} \,. \qquad (2.94)$$
The stacking up can be done for any number of systems. The result always takes the form of Equation (2.94). The F-matrix is a direction connection matrix with diagonal blocks as before (Definition 2.78), but over all streams. The stoichiometric matrix of the complete plant,

$$N := \operatorname{diag}\big[\,N_S\,\big]_{\forall S} \,, \qquad (2.95)$$

is a block matrix with the stoichiometric matrix of each system as the respective block.
The colour definitions can be directly used. For the purpose of demonstration, let us use a simple colouring in which we indicate mass, heat and work as separate colours. In the network representation we shall use superscripts to indicate the colour: m for mass, n for component mass where appropriate, q for heat and w for work. The network representation of a plant described using a state consisting of component mass and enthalpy, thus assuming constant pressure, reads for the component mass balance and the enthalpy balance of system s:

$$\dot{n}_s := \sum_{\forall m} \alpha^{m}\,I^{m}\,\hat{n}^{m} + N_s^{T}\,\tilde{n}_s \,, \qquad (2.96)$$

$$\dot{H}_s := \sum_{\forall m} \alpha^{m}\,\hat{H}^{m} + \sum_{\forall q} \alpha^{q}\,\hat{q}^{q} + \sum_{\forall w} \alpha^{w}\,\hat{w}^{w} \,. \qquad (2.97)$$
With the vector n̂_s being the stack of the flows attached to the system s, the matrix F_s^n contains the non-zero elements of the s-th row of the adjacency matrix for the mass-coloured graph. Allowing for more than one species, the row is a set of equivalent rows, one for each species; thus the index s. The row vector f_s^m is the scalar version of F_s^n, as the energy is a scalar quantity, but it is associated with the same mass streams. Each enthalpy flow is calculated as a mixture property of the respective stream. The abstraction is done similarly for the other colours, namely the heat and the work.
The stacking can be extended over the number of systems to obtain the intriguingly compact representation:

$$\dot{n} := F^{n}\,\hat{n} + N^{T}\,\tilde{n} \,, \qquad (2.100)$$

$$\dot{H} := F^{m}\,\hat{H}(\hat{n}_s) + F^{q}\,\hat{q} + F^{w}\,\hat{w} \,. \qquad (2.101)$$
The graph matrices are now the complete adjacency matrices of the respectively coloured sub-graphs of the overall representation of the plant: a graph of capacities connected by the transfer of extensive quantities, with the capacities having the ability to transpose extensive quantities. The stoichiometric matrix of the network is a block-diagonal matrix with the stoichiometry of each system in the respective block. For the implementation, this representation can be further abstracted into a global stoichiometric matrix, which is index-mapped to form the block-diagonal matrix used here for simplicity of the equations.
2.7. THREE EXTREME DYNAMIC ASSUMPTIONS 51

The two balance equations are usually normed with an extensive quantity that does not change with time in the given process. Often this is the volume. Thus let ϕ_j be this extensive, time-constant quantity; then the following intensive quantity is defined:

$$\xi_{ij} := \frac{\Phi_i}{\varphi_j} \,. \qquad (2.104)$$

Figure 2.15: A small capacity (s) communicating with a large capacity (h) connected to an environment. [Diagram: reactants a, b; reactor contents R, E; energy source e; heater h; sensor s; separation P; flows ϕ̂_{e|h} and ϕ̂_{h|s}.]

Applying the transformation to the two conservation equations and observing that ϕ_j := constant, one gets:

$$0 := \hat{\varphi}^{i}_{h|s} \,. \qquad (2.108)$$
[Figure 2.16: environment e connected to the large capacity h by the flow ϕ̂_{e|h}.]

The second of the above equations (Figure 2.16) indicates that the norming of the slow equation is not required for obtaining the outer solution. It is, though, illustrative for comparing the large with the small capacities. The solution is obtained by simple integration:
$$\Phi(T) := \int_0^{T} \hat{\varphi}^{i}_{e|h}\; dt + \Phi(0) \,. \qquad (2.111)$$
Obviously, to execute the integration the model would have to be completed by adding the description of the transfer ϕ̂^i_{e|h}.

The inner solution is obtained by stretching the time scale (Section 7.5.1.2):

$$\tau := \frac{t}{\epsilon} \,, \qquad (2.112)$$
and setting ǫ := ϕ_j^s one finds

$$\dot{\xi}^{ij}_s(t) := \int_t^{t+\Delta t} \hat{\varphi}^{i}_{h|s}\!\left(\Phi^{i}_h(t)\right) d\tau' \,. \qquad (2.113)$$

The inner solution thus has the state of the slow system as input, which implies that the fast system approaches the current state of the slow system in the short time scale (Figure 2.17).
Figure 2.17: The small capacity approaches the equilibrium with the large capacity quickly; the large capacity acts as a reservoir. [Diagram: h connected to s by the flow ϕ̂_{h|s}.]
The outer solution is probably more often required than the inner, though with the growing interest in multi-scale systems, interest is correspondingly shifting. Nevertheless, let us have a closer look at the outer solution for a larger system next.

On the slow time scale the fast system is coalesced with the slow one; on the fast time scale, the slow one stands still.

... compared to the relevant dynamics. Again we start with the sample process (Figure 2.15), but in this case we state the assumption that the transfer between the two capacities h and s is fast compared to the transfer from the environment. The nature of the extensive quantity is not relevant, thus we also drop the superscript to simplify the notation.
Further, for the purpose of illustration, let the transfer laws be given such that the parameter of the fast transfer is much larger than the parameter of the slow transfer, thus Θ_f ≫ Θ_s. Dividing the conservation by the fast transfer parameter yields:

$$\Theta_f^{-1}\,\dot{\Phi}_h := \Delta\pi_f - \frac{\Theta_s}{\Theta_f}\,\Delta\pi_s \,, \qquad (2.119)$$

$$\Theta_f^{-1}\,\dot{\Phi}_s := -\Delta\pi_f \,. \qquad (2.120)$$

And

$$\lim_{\Theta_f \to \infty} \Theta_f^{-1}\,\dot{\Phi}_h := \lim_{\Theta_f \to \infty}\left(\Delta\pi_f - \frac{\Theta_s}{\Theta_f}\,\Delta\pi_s\right) \,, \qquad (2.121)$$

$$\lim_{\Theta_f \to \infty} \Theta_f^{-1}\,\dot{\Phi}_s := \lim_{\Theta_f \to \infty}\left(-\Delta\pi_f\right) \,. \qquad (2.122)$$

Thus

$$0 := \Delta\pi_f \,. \qquad (2.123)$$
The assumption thus results in the equilibrium condition for the boundary, meaning that the conjugate to the potential is equal on the two sides of the boundary. With the condition given in Equation (2.28), the argument holds for a general transfer law.

Assuming instant transfer results in the two connected systems being at equilibrium with respect to the force driving the instant transfer.
Thus the resulting topology (Figure 2.18) is now slightly different from Figure 2.16:

[Figure 2.18: environment e connected to the merged capacity h+s by the flow ϕ̂_{e|h}.]

$$\dot{x} := F\,\hat{x} + S\,\tilde{x} \,, \qquad (2.125)$$
and let the transposition be described by pairs of reactions, one forward and one backward. Equilibrium is approached when the forward reaction (index f) is equal to the backward reaction (index b), with both reaction constants being large. This result is readily obtained by scaling the conservation with one of the two reaction constants and applying the singular perturbation argument analogous to the one used for the fast transfer.

For a pair of forward and backward reactions, the S matrix, being the transpose of the stoichiometric matrix as it is usually defined, includes the two vectors of stoichiometric coefficients, thus

$$S := \begin{bmatrix} \nu_f & \nu_b \end{bmatrix} \,. \qquad (2.126)$$
The transposition is

$$\tilde{x} := V\,K\,g(y) \,, \qquad (2.127)$$

where the function g(y) reflects the dependency of the transposition on the secondary state. In the case of chemical reactions, this function is usually a power function of the involved species' concentrations, with the respective stoichiometric coefficient as the power coefficient. An extensive quantity usually enters because the transposition rate is normed by this extensive quantity, often the volume, but it can also be the area if one talks about an active surface. The matrix K is a diagonal matrix with the reaction constants.
As suggested, labelling the forward and backward reactions with the indices f and b respectively, the resulting expression for species s is ν_s k_b g_b(y) + ν_s k_f g_f(y), and the conservation is of the corresponding form. Since

$$\nu_b := -\nu_f \,, \qquad (2.129)$$

scaling the equation with one of the two large reaction constants gives

$$k_f^{-1}\,\dot{x} := k_f^{-1}\,F\,\hat{x} - V\,\nu_f\left(\frac{k_b}{k_f}\,g_b(y) - g_f(y)\right) \,. \qquad (2.130)$$
Thus, taking the limit and observing that the two reaction constants are of the same order of magnitude, one finds:

$$\lim_{k_f, k_b \to \infty} k_f^{-1}\,\dot{x} := 0 := \lim_{k_f, k_b \to \infty}\left( k_f^{-1}\,F\,\hat{x} - V\,\nu_f\left(\frac{k_b}{k_f}\,g_b(y) - g_f(y)\right)\right) \,. \qquad (2.131)$$

Consequently:

$$0 := \frac{k_b}{k_f}\,g_b(y) - g_f(y) \,, \qquad (2.132)$$
assuming that the stoichiometric coefficient is not equal to zero. With y usually being the composition, this reaction-equilibrium equation provides an algebraic link between the concentrations of the species involved in the reaction. For example, if the reaction is A → B and y := c, the vector of concentrations, then

$$\frac{k_f(T)}{k_b(T)} := \frac{c_B^{\gamma_b}}{c_A^{\gamma_f}} \,. \qquad (2.135)$$
Given the reaction, the power coefficients are likely to be 1, thus γ_f := γ_b := 1:

$$\frac{k_f(T)}{k_b(T)} := \frac{c_B}{c_A} \,. \qquad (2.136)$$
It is interesting in this context to observe that the non-linearities in the temperature of the reaction constants compensate each other to some extent. Thus the reaction equilibrium is much less a function of the temperature than the individual reaction constants.
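The compensation can be made concrete with Arrhenius-type constants k(T) = k₀ exp(−Eₐ/(R T)): the ratio k_f/k_b depends on temperature only through the difference of the activation energies. The numbers below are illustrative assumptions chosen so that the difference is small relative to the individual activation energies.

```python
import math

R = 8.314  # J/(mol K)

def k(k0, Ea, T):
    """Arrhenius reaction constant."""
    return k0 * math.exp(-Ea / (R * T))

def kf(T):
    return k(1e7, 60e3, T)   # forward: assumed Ea = 60 kJ/mol

def kb(T):
    return k(1e5, 50e3, T)   # backward: assumed Ea = 50 kJ/mol

# warming from 300 K to 320 K:
change_kf = kf(320.0) / kf(300.0)                           # individual constant
change_eq = (kf(320.0) / kb(320.0)) / (kf(300.0) / kb(300.0))  # equilibrium ratio
```

With these numbers the forward constant grows by a factor of about 4.5 over 20 K, while the equilibrium ratio moves by less than 30%, driven only by the 10 kJ/mol difference in activation energies.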
The rate adjusts to the equilibrium condition; it is thus not known and is eliminated from the conservation equation through a null-space calculation. Given the conservation for a system, one splits the reactions into a set of fast and a set of slow reactions. In order to eliminate the fast reactions, one multiplies with a matrix Ω such that Ω S_f := 0. The reader should notice that this elimination operation, whilst similar, is different from the elimination of flows. When eliminating fast reactions, the result is a linear combination of species masses in the affected system, whilst for the flow elimination, the species masses of the two connected systems are added. The fast reaction assumption does not affect the hydraulics of the process, but forms invariant species groups. A good example of such a system is an acid–alkali reaction, which is very fast compared to many other types of reactions [3].
with the graphs F_s, F_f, F_{sf} = −F_{fs}¹¹ being the direction matrices for the slow internal streams, the fast internal streams, and the streams coupling the fast and the slow sub-networks.

¹¹ Notice the relation between the two graph matrices connecting the slow and the fast sub-networks.
Norming the conserved extensive quantities with a time-constant extensive quantity,

$$\xi_{ij} := \frac{\Phi_i}{\varphi_j} \,, \qquad (2.141)$$

and checking the validity of the Tikhonov condition, the fast network reduces to:

$$0 := F_f\,\hat{\varphi}_f + F_{sf}\,\hat{\varphi}_{sf} + S_f\,\tilde{\Phi}_f \,. \qquad (2.142)$$
Looking at the very common case where the fast sub-network has only fast internal flows and no transposition, the problem reduces significantly:

$$\dot{\Phi}_s := F_s\,\hat{\varphi}_s + S_s\,\tilde{\Phi}_s \,. \qquad (2.146)$$
The singular perturbation removes the fast state variables from the representation. No relation between the state of the fast system and the state of the slow system results from this manipulation. Thus, if the exchange of extensive quantity between the fast system and the environment is not measured, but only known as a function of the state of the fast system and the state of the environment, the model is not complete.

This assumption is often introduced when one knows all the streams in and out of the fast system except one, assuming here that all the stream vectors have the same dimensionality as the state. Knowing all the other streams, the steady-state balance equation enables the computation of the one remaining stream. This argument is to be adjusted if the dimensionalities of the various connections do not match the dimensionality of the system's state.

$$\Omega\,P_a\,\dot{x} := \Omega\,P_a\,F\,\hat{x} + \Omega\,P_a\,S\,\tilde{x} := 0 \,; \qquad (2.148)$$
that is, a linear combination of the states is constant. This then defines k algebraic constraints, providing equations for k dependent algebraic variables. The above equations may be used to determine a set of dependent quantities. Bipartite graph analysis can here help to determine the set of possible quantities that can be determined in a specific case. Further, the above equations can be added to the other part, thereby eliminating the connecting streams, but providing the opportunity to possibly compute quantities that depend on the algebraic constraints.
Figure 2.19: Abstracting the interaction of a plant with its environment, where one set of interactions is free, meaning it cannot be controlled, whilst the other interaction can be manipulated. The controller gets information from both partners, the environment and the embedded plant. [Diagram labels: setpoint; manipulate; cannot be manipulated.]
inverse being manipulated by the controller (being the p_{a|b}; see Section 2.5.1). For the controller, the term input is used for the observations of the state and the setpoint information, whilst the output is the manipulated variable affecting the flow between the two systems.

Figure 2.20 shows the block diagram as it evolves from Figure 2.21. Note how the controller gets the secondary state as input and computes properties of the transport equations. The model, whilst linear in transport and transposition, is nonlinear in the variables the controller manipulates, and it is not linear in the state either. Even worse, there are likely implicit algebraic equations involved.
2.9 Summary
2.9.1 The Primary Model
2.9.1.1 Network of Distributed Systems
This first chapter introduced the concept of a basic mapping of a physical system into a set of mathematical objects that form the basic model. Mathematically this representation is a distributed one, meaning that the dynamic system is not only a function of time, but also of the spatial co-ordinate. The fundamental description is based on the conservation laws; thus the fundamental state is formed by the conserved quantities, the fundamental extensive quantities.
[Block diagram: controller c(y, y_s) with setpoint y_s produces u_c ∈ p_t; transport F t(y, p_t) gives x̂; integration of ẋ from x(0) gives x; the state variable transformation 0 := s(x, y, p_s) gives y; transposition R r(y, p_r) gives x̃.]
In a first step, this description is augmented with two more components, namely the internal dynamics, which describe the transposition of one conserved quantity into another one (reactions, for example), and the transfer of the extensive quantities within the system and across its boundaries. The latter are often phase boundaries and are characterised by discontinuities in a set of intensive variables, whilst the fundamental extensive variables are continuous. Also, the derivatives of the total energy with respect to the natural variables (component mass, volume, entropy) yield the respective conjugate variables, whose gradients represent the driving forces for the transfer of the respective fundamental extensive quantity. Two conditions hold: there is no transfer if the driving force is zero, and the flow direction is determined by the sign of the driving force: the system tends towards a uniform energy distribution.
Three assumptions are commonly made, all of which convert to time-scale assumptions:

• Small vs. large capacities: being usually interested in the longer time scale, this leads in a first instance to an event-dynamic assumption for the fast parts, thereby eliminating the state of the fast system. This enables the computation of streams in or out of the fast part for which one has no model available.

• Fast vs. slow transport: in the short time scale this eliminates the slow transport, with the dynamics completely dominated by the fast transport. In the slow time scale, the fast transport forces the local equilibrium for the quantity that is transported fast. This forms an algebraic link between the states of the two fast-coupled systems, requiring a corresponding reduction of the state.

• Fast vs. slow transposition: in the short time scale it is the transposition that dominates. It usually requires an equally fast supply of the quantities being transposed into each other. On the long time scale a local equilibrium is achieved, as the transposition goes both ways. This forms an algebraic link between state variables, requiring a corresponding reduction of the state.
The model for a single system embedded in its environment takes the form:

dynamics: $\dot{x} := F\,\hat{x} + R\,\tilde{x}$ , (2.152)
transport: $\hat{x} := t(y, y_e)$ , (2.153)
transposition: $\tilde{x} := r(y)$ , (2.154)
state variable transformation: $0 := s(y, x)$ . (2.155)

[Block diagram: transport F t(y, p_t) gives x̂; integration of ẋ from x(0) gives x; the state variable transformation 0 := s(x, y, p_s) gives y; transposition R r(y, p_r) gives x̃.]
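The four-equation structure (2.152)–(2.155) can be sketched for the smallest possible instance: one species, one exchange stream, one first-order decay reaction, and an explicit state variable transformation y = x/V. All parameter values are illustrative assumptions; the point is how the four blocks plug together.

```python
import numpy as np

F = np.array([[1.0]])     # one stream, positive into the system
R = np.array([[-1.0]])    # one reaction consuming the species
V = 2.0                   # assumed constant volume

def transport(y, y_e, theta=0.3):        # x_hat := t(y, y_e)   (2.153)
    return np.array([theta * (y_e - y)])

def transposition(y, k=0.4):             # x_tilde := r(y)      (2.154)
    return np.array([k * y * V])

def secondary_state(x):                  # solves 0 := s(y, x) explicitly here
    return x / V                         # y is the concentration  (2.155)

x, y_e = np.array([1.0]), 1.5
dt = 1e-3
for _ in range(20000):                   # dynamics (2.152), explicit Euler
    y = secondary_state(x[0])
    x = x + dt * (F @ transport(y, y_e) + R @ transposition(y))

y_final = secondary_state(x[0])
```

In a general model, `secondary_state` would be an implicit algebraic solve, which is where the differential-index issues discussed below originate.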
Case 2: Some flows are not known, not modelled, nor measured.

Case 3: Some capacity effects are negligible. This yields directly the singular perturbation of the respective accumulation terms, thus eliminating those states from the dynamics. This is the basic assumption made when mapping a transfer system into a simple resistance.

Case 4: Some secondary states are constant. This leads to algebraic constraints imposed on the fundamental state and requires a state-space reduction. This should always be cleared out before simulating the process, as otherwise it generates a model of higher differential index, which causes problems when integrating. Such problems can also occur dynamically, as for example a control action can stop the dynamics of a part of a system.
Chapter 3

Approximating Distributed Systems
The grid, whilst often of constant grid width, does not have to be constant. In fact, looking at the estimation errors, it is interesting to adapt the grid to the changing function being approximated, thus to the distribution and the dynamics.
Since the purpose is to introduce and explain the core idea and background of the method, the simplest case is chosen, namely a one-dimensional problem and a constant grid. For the purpose of demonstration, let x be a state and r a scalar independent variable; thus dx/dr is a scalar first derivative of x with respect to r, and d²x/dr² is the corresponding second derivative. Let further r_k denote the k-th point in the one-dimensional grid. With the objective of approximating second-order derivatives, the minimal number of approximation points is three. A generic set of points is defined by labelling the three points with the subscripts 0, 1, 2, with 0 indicating the point k−1, 1 the point k, and 2 the point k+1. In each point the state function can be expanded in a Taylor series:

$$x(r_k + h) := \sum_{i:=0}^{n} \frac{1}{i!}\,\left.\frac{\partial^i x}{\partial r^i}\right|_{r_k} h^i + \frac{1}{(n+1)!}\,\left.\frac{\partial^{n+1} x}{\partial r^{n+1}}\right|_{\xi} h^{n+1} \,. \qquad (3.1)$$
$$x_0 := x_1 + \left.\frac{\partial x}{\partial r}\right|_{r_1}(-h) + \frac{1}{2}\left.\frac{\partial^2 x}{\partial r^2}\right|_{r_1}(-h)^2 + \ldots \,, \qquad (3.2)$$

$$x_0 := x_2 + \left.\frac{\partial x}{\partial r}\right|_{r_2}(-2h) + \frac{1}{2}\left.\frac{\partial^2 x}{\partial r^2}\right|_{r_2}(-2h)^2 + \ldots \,, \qquad (3.3)$$

$$x_1 := x_0 + \left.\frac{\partial x}{\partial r}\right|_{r_0} h + \frac{1}{2}\left.\frac{\partial^2 x}{\partial r^2}\right|_{r_0} h^2 + \ldots \,, \qquad (3.4)$$

$$x_1 := x_2 + \left.\frac{\partial x}{\partial r}\right|_{r_2}(-h) + \frac{1}{2}\left.\frac{\partial^2 x}{\partial r^2}\right|_{r_2}(-h)^2 + \ldots \,, \qquad (3.5)$$

$$x_2 := x_0 + \left.\frac{\partial x}{\partial r}\right|_{r_0} 2h + \frac{1}{2}\left.\frac{\partial^2 x}{\partial r^2}\right|_{r_0}(2h)^2 + \ldots \,, \qquad (3.6)$$

$$x_2 := x_1 + \left.\frac{\partial x}{\partial r}\right|_{r_1} h + \frac{1}{2}\left.\frac{\partial^2 x}{\partial r^2}\right|_{r_1} h^2 + \ldots \,. \qquad (3.7)$$
Subtracting Equation (3.7) from Equation (3.2) and ignoring the error terms for the time being:

$$x_0 - x_2 := -2\,\left.\frac{\partial x}{\partial r}\right|_{r_1} h \,, \qquad (3.8)$$

$$\left.\frac{\partial x}{\partial r}\right|_{r_1} := -\frac{x_0 - x_2}{2h} \,. \qquad (3.9)$$

Adding the two equations instead eliminates the first derivative and yields the second:

$$\left.\frac{\partial^2 x}{\partial r^2}\right|_{r_1} := \frac{x_0 - 2x_1 + x_2}{h^2} \,. \qquad (3.10)$$
For the second derivative, truncation only occurs at the fourth-order term:

$$O(h^2) := -\frac{h^4}{4!\,h^2}\left(\left.\frac{\partial^4 x}{\partial r^4}\right|_{\xi_{01}} + \left.\frac{\partial^4 x}{\partial r^4}\right|_{\xi_{21}}\right) \qquad (3.14)$$

$$\phantom{O(h^2)} := -\frac{h^2}{12}\,\left.\frac{\partial^4 x}{\partial r^4}\right|_{\xi_{02}} \,. \qquad (3.15)$$
The pair Equation (3.4) and Equation (3.6) yields the two approximations for the derivatives at r_0, and finally the pair Equation (3.3) and Equation (3.5) gives the two at r_2.

The following table lists all the three-point approximations:

Derivative         Approximation                    Error estimate
∂x/∂r|_{r0}        (1/(2h)) (−3x0 + 4x1 − x2)       (h²/3) ∂³x/∂r³|_ξ
∂x/∂r|_{r1}        (1/(2h)) (−x0 + x2)              −(h²/6) ∂³x/∂r³|_ξ
∂x/∂r|_{r2}        (1/(2h)) (x0 − 4x1 + 3x2)        (h²/3) ∂³x/∂r³|_ξ
∂²x/∂r²|_{r0}      (1/h²) (x0 − 2x1 + x2)           −h ∂³x/∂r³|_{ξ1} + (h²/6) ∂⁴x/∂r⁴|_{ξ2}
∂²x/∂r²|_{r1}      (1/h²) (x0 − 2x1 + x2)           −(h²/12) ∂⁴x/∂r⁴|_ξ
∂²x/∂r²|_{r2}      (1/h²) (x0 − 2x1 + x2)           h ∂³x/∂r³|_{ξ1} + (h²/6) ∂⁴x/∂r⁴|_{ξ2}
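The formulas in the table are easy to verify numerically. The sketch below checks three of them on the test function x(r) = sin r (my choice of test function, not the notes'): the central formulas at r₁ and the one-sided first derivative at r₀ all reproduce the exact derivatives to well within their stated error orders.

```python
import numpy as np

h = 1e-3
r1 = 0.7
x0, x1, x2 = np.sin(r1 - h), np.sin(r1), np.sin(r1 + h)

d1_central = (-x0 + x2) / (2 * h)               # dx/dr at r1
d2_central = (x0 - 2 * x1 + x2) / h**2          # d2x/dr2 at r1
d1_onesided = (-3 * x0 + 4 * x1 - x2) / (2 * h)  # dx/dr at r0 = r1 - h

err1 = abs(d1_central - np.cos(r1))        # should be O(h^2/6)
err2 = abs(d2_central + np.sin(r1))        # exact d2(sin)/dr2 = -sin; O(h^2/12)
err3 = abs(d1_onesided - np.cos(r1 - h))   # should be O(h^2/3)
```

Halving h should cut each of these errors by roughly a factor of four, which is a quick way to confirm the second-order behaviour on a non-uniform problem as well.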
The corresponding four-point approximations are:

Derivative         Approximation                         Error estimate
∂x/∂r|_{r1}        (1/(6h)) (−2x0 − 3x1 + 6x2 − x3)      −(h³/12) ∂⁴x/∂r⁴|_ξ
∂x/∂r|_{r2}        (1/(6h)) (x0 − 6x1 + 3x2 + 2x3)       −(h³/12) ∂⁴x/∂r⁴|_ξ
∂x/∂r|_{r3}        (1/(6h)) (−2x0 + 9x1 − 18x2 + 11x3)   −(h³/4) ∂⁴x/∂r⁴|_ξ
∂²x/∂r²|_{r0}      (1/(6h²)) (12x0 − 30x1 + 24x2 − 6x3)  −(11h²/12) ∂⁴x/∂r⁴|_{ξ1} + O(h³) ∂⁵x/∂r⁵|_{ξ2}

[Figure: the corresponding computational stencils, including the five-point stencil for the two-dimensional Laplacian with centre weight −4 and neighbour weights 1.]
The schemes have many variations, primarily because of matching the rectangular grid to the geometry of the problem. The grid also does not need to be constant or rectangular; particularly in computational fluid mechanics the grid is adapted to match the changing streamlines, and in some applications the use of triangular grids is advantageous.
Chapter 4

System Theory's {A, B, C, D}
Control and system theory builds heavily on a system representation that came into the limelight in the 1950s, when system theory evolved, establishing itself in the decade thereafter. The developments around Kalman culminated in the celebrated paper Kalman (1963), which sets a lot of the overall framework. Interesting is also Zadeh's paper Zadeh (1962), which reflects the shift in philosophy from circuits to systems. Not least, the developments involved a discussion of the term state, as for most applications an input/output representation is sufficient, yet the whole of the new theory was constructed around the state. The core of the description is a state-space representation of linear, time-invariant systems. Books were written on the subject, for example Kailath (1980) and Chen (1984). Today, linear, time-invariant systems are a core part of control. Whilst most systems are non-linear, the understanding of linear systems is essential and provides important insights and consequently guidelines for nonlinear systems.

The attribute linearity applies to the state x and the input u on the right-hand side, and to the time derivative of the state and the output y on the left-hand side. The term time-invariant states that the four matrices A, B, C, D are not a function of time.
72 CHAPTER 4. SYSTEM THEORY'S {A,B,C,D}
The first part of the solution reflects the impact of the initial conditions, whilst the second part, the integral, is the convolution of the input with the impulse response, the matrix Φ, which is called the fundamental matrix. It is the exponential of the system matrix A with the respective time argument:

$$\Phi(t) := e^{A\,t} \qquad (4.4)$$

$$\phantom{\Phi(t)} := V\,e^{\Lambda\,t}\,V^{-1} \,, \qquad (4.5)$$

with V, Λ being the eigenvector and the eigenvalue matrix, respectively. The exponential of the eigenvalue matrix is defined as the diagonal matrix of the exponentials of the eigenvalues with the time argument, and zero otherwise.
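Equation (4.5) assumes A is diagonalisable. The sketch below computes Φ(t) that way for an assumed 2×2 system and cross-checks it against a truncated Taylor series of e^{At}, which is the definition behind Equation (4.4).

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # assumed system matrix; eigenvalues -1 and -2
t = 0.5

# Phi(t) = V exp(Lambda t) V^{-1}   (Equation (4.5))
lam, V = np.linalg.eig(A)
Phi = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real

# cross-check against the series definition of exp(A t)   (Equation (4.4))
Phi_series = np.eye(2)
term = np.eye(2)
for i in range(1, 30):
    term = term @ (A * t) / i
    Phi_series = Phi_series + term

err = np.abs(Phi - Phi_series).max()
```

For a defective A (repeated eigenvalues without a full eigenvector set) the eigendecomposition route fails, and a Jordan form or a direct matrix-exponential routine has to be used instead.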
With computers being time-discrete units, sampled-data systems have become essential for the operation of a plant. The most common version assumes that the sampling is instantaneous. The sampler represents the analogue/digital unit in the process interface, and the discrete input is connected to a zero-order-hold unit representing the digital/analogue conversion unit. In most cases, the sampling rate is constant, and the behaviour of the sampled-data system can be readily computed from the solution of the continuous system by integrating recursively over a time interval for a given input:

$$x(t + \Delta t) := \Phi(\Delta t)\,x(t) + \int_t^{t+\Delta t} \Phi(t + \Delta t - \tau)\,B\,u(\tau)\; d\tau \,. \qquad (4.6)$$

Using the notation k for t = k Δt and with u(τ) := u(k) thus being held constant by a zero-order-hold (ZOH) element during each time period, the integral can be simplified:

$$x(k+1) := \Phi(\Delta t)\,x(k) + \left(\int_0^{\Delta t} \Phi(\tau)\; d\tau\right) B\,u(k) \,. \qquad (4.7)$$
Also the case of a singular matrix A is not difficult to treat. A variable transformation on the integral yields

$$\int_{t_k}^{t_{k+1}} e^{A\,(t_{k+1} - \tau)}\, d\tau := \int_0^{\Delta t} V\,e^{\Lambda\,\tau'}\,V^{-1}\, d\tau' \qquad (4.12)$$

$$:= V \int_0^{\Delta t} e^{\Lambda\,\tau'}\, d\tau' \; V^{-1} \qquad (4.13)$$

$$:= V \left[ \begin{cases} \int_0^{\Delta t} e^{\lambda_i \tau}\, d\tau \,; & \lambda_i \neq 0 \\ \int_0^{\Delta t} 1\; d\tau \,; & \lambda_i = 0 \end{cases} \right]_{\forall i} V^{-1} \qquad (4.14)$$

$$:= V \left[ \begin{cases} \frac{1}{\lambda_i}\, e^{\lambda_i \tau} \Big|_0^{\Delta t} \,; & \lambda_i \neq 0 \\ \tau \Big|_0^{\Delta t} \,; & \lambda_i = 0 \end{cases} \right]_{\forall i} V^{-1} \qquad (4.15)$$

$$:= V \left[ \begin{cases} \frac{1}{\lambda_i}\left(e^{\lambda_i \Delta t} - 1\right) \,; & \lambda_i \neq 0 \\ \Delta t \,; & \lambda_i = 0 \end{cases} \right]_{\forall i} V^{-1} \,. \qquad (4.16)$$
The use of operators is often convenient because it allows condensing the equations and thus reduces writing, at the same time providing a better overview of the problem being treated. Operator calculus is widely used in manipulating differential and difference equations with constant coefficients. For differential operations a differential operator is defined¹; for difference equations a shift operator is defined.
Let {f(k) | k := ..., −1, 0, 1, ...} be a two-sided, infinite sequence of data points representing the discrete signal. The forward shift operator q is then defined by

$$q\,f(k) := f(k+1) \,, \qquad (4.18)$$

and the backward shift operator q^{-1} is defined by the relation

$$q^{-1}\,f(k) := f(k-1) \,. \qquad (4.19)$$
Note : The shift operators are bounded, in fa
t their norm is 1. This is not
the
ase for their
ounterpart the dierential operator. It is unbounded. This is
one of the reasons why dieren
e
al
ulus is simpler than dierential
al
ulus.
1 Frequently used symbols are D, d and p.
74 CHAPTER 4. SYSTEM THEORY'S {A,B,C,D}
Forward shift operators are convenient when discussing stability and the order of a system. The backward shift operator is handy in problems related to causality. The manipulation of difference equations is illustrated in the following example: with

C := 1        (4.35)
Φ := a        (4.36)
Γ := b        (4.37)

the solution reads

y(k) = aᵏ x(0) + Σ_{i:=1}^{k} a^{i−1} b u(k − i)        (4.38)
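The closed-form solution of Equation 4.38 can be checked against marching the recursion x(k+1) = a x(k) + b u(k); a minimal sketch with arbitrary example numbers.

```python
def y_recursive(a, b, x0, u):
    """March x(k+1) = a x(k) + b u(k) with y(k) = x(k) (C = 1)."""
    x, ys = x0, [x0]
    for uk in u:
        x = a * x + b * uk
        ys.append(x)
    return ys

def y_closed_form(a, b, x0, u, k):
    """Closed-form solution of Equation 4.38:
    y(k) = a^k x(0) + sum_{i=1}^{k} a^{i-1} b u(k - i)."""
    return a**k * x0 + sum(a**(i - 1) * b * u[k - i] for i in range(1, k + 1))

a, b, x0 = 0.8, 0.5, 1.0                   # arbitrary example numbers
u = [1.0, -2.0, 0.5, 3.0, 0.0]
```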
Note: The shift operator plays the same role in the difference calculus as does the differential operator in the differential calculus. It must therefore be well distinguished from the z operator, which is used in the so-called z-transformation. The z-transformation is the analogue to the s-transformation.
C_O := (B, A B, . . . , A^{n−1} B) .        (4.39)

[Figure: block diagram of the plant P connected to an environment component U through a connection with resistance R, manipulated by the controller C; the signals y_s, y_P, y_E, u and the states x̂_{U|P}, x̂_{R|P} are indicated.]
of the available driving force, being the difference between the two connecting points U and P. The consequence of this analysis is that there exist two different classes of variables that affect the movement of the state of the plant: the state of the connected systems in the environment providing the potential driving force, and the resistance of the valve manipulated by the controller. It should be noted that only one of the two is actually manipulated, so one could consider the state of the environment components to enter as disturbances or loads, depending on the viewpoint one takes. For the reason of lifting out the difference, we shall just do the latter and label the resistance.

Assuming some limited connectivity and only reactions taking place in P, the model could then read as follows: with d_U := L_U y_U and d_R := L_R y_R completing the model. The two matrices L_U and L_R select a sub-vector. The control action u is most conveniently normed between [0, 1], with the characteristics of the connection being captured in the parameter vector Θ for the respective connection.

Aiming at an ABCD description for the plant, we need to linearise around a steady-state point.
Substitution and isolating x yields:

x(s) = (s I − A)⁻¹ ( x(0) + B u(s) ) ,        (4.53)
y := C x(s) + D u(s) .        (4.54)

and

y(s) = C (s I − A)⁻¹ ( x(0) + B u(s) ) + D u(s) ,        (4.55)
     := C (s I − A)⁻¹ x(0) + ( C (s I − A)⁻¹ B + D ) u(s) .        (4.56)

The solution in the Laplace domain has two components. The first is

C (s I − A)⁻¹ x(0) ,

which reflects the effect of the initial conditions over time, whilst the second part, namely

( C (s I − A)⁻¹ B + D ) u(s) ,

represents the effect of the inputs.

The matrix

G(s) := C (s I − A)⁻¹ B + D ,        (4.57)

is called the transfer function matrix and represents the pure input/output behaviour of the modelled plant.
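Equation 4.57 can be evaluated pointwise without forming the inverse explicitly, by solving a linear system instead; the first-order example is hypothetical.

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D (Equation 4.57) at one value of s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# hypothetical first-order example dx/dt = -x + u, y = x: G(s) = 1/(s + 1)
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
G = transfer_matrix(A, B, C, D, 2.0j)
```

Solving (sI − A) x = B rather than inverting is the usual numerically preferred route.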
To understand the transfer function matrix better, it is illustrative to rewrite the inverse as the fraction of the adjoint and the determinant:

G(s) := C ( adj(s I − A) / | s I − A | ) B + D .        (4.58)

The denominator | s I − A | is the characteristic polynomial, and thus its roots are the eigenvalues of A. The matrix adj(s I − A) is a polynomial matrix in s. Each of the individual transfer functions in the transfer function matrix can be written as a ratio of two polynomials in s, which is derived in Section 4.3.2:

G(s) := B_{i,j}(s) / A(s) .        (4.60)

The numerator polynomial B_{i,j}(s) varies with i, j, whilst the denominator polynomial is always the same. The dynamic properties are thus characterised by the roots of the B_{i,j}(s) polynomial, the zeros, and the roots of the characteristic polynomial A(s), thus the eigenvalues of the matrix A, the poles. It is the poles that determine stability: the system is stable if all the poles are in the left-half plane (Chapter 5).
where |g(s)| is the absolute value of g(s) and ϕ the phase angle. For n signal operation blocks g₁, g₂, . . . , gₙ in series, the overall transfer function is related to the transfer functions of the individual blocks by

g(s) = |gₙ(s)| |gₙ₋₁(s)| . . . |g₁(s)| e^{i(ϕₙ + ϕₙ₋₁ + . . . + ϕ₁)}        (4.63)

and thus the roots are the eigenvalues of A, which are also called the poles of the respective transfer function.
4.3. FREQUENCY DOMAIN REPRESENTATION 81
with |Q_{ij}| being the determinant of the ij-minor obtained by deleting the i-th row and the j-th column of the matrix Q. Determinants can be recursively expanded into weighted sums of sub-determinants. With Q := I s − A, the cofactors become polynomials in s. Each individual transfer function in G is thus a ratio of two polynomials, namely

A⁻¹ = adj(A) / |A|        (4.72)

³ monic polynomial :: a polynomial with the leading (highest power) coefficient being 1
⁴ Note: k is not the steady-state gain, but the gain whereby the rest of the transfer function has a gain of 1. For stable processes, the gain is identical to the steady-state gain. (See also Section 4.3.2.3.)
The indexes i and j run over the set of different roots, whilst mᵢ indicates the number of equal real roots, nⱼ the number of equal conjugate-complex pairs of roots and l the number of roots that are zero.

The roots of the numerator are called zeros. The term zero reflects the fact that an input with the frequency of the zero is completely absorbed by the system. The roots of the denominator polynomial are the poles, which are the frequencies where the transfer function approaches infinity and thus shows a pole in the graphical representation of the transfer function.
Polynomials in s are the most common form of transfer functions. They are the Laplace transform of SISO systems described by ordinary differential equations in the time domain. Many systems fit into this class, as becomes apparent when modelling physical-chemical systems.

Another important class of transfer functions derives from models describing transfer lags, that is, differential equations which include terms of the type

x(t − τ_d)        (4.78)

with τ_d being the dead time. The Laplace transform of this term is

e^{−τ_d s} x(s) .        (4.79)
A simple model of a flow through a pipe is plug flow, assuming that a flat front moves through the pipe. This model neglects any friction effects, both friction of the fluid on the wall and fluid-internal friction. The model for the pipe is then simply

y(t) := u(t − τ_d)

where τ_d is the length of the pipe divided by the stream velocity.
It is often easier to assess the contents of data more quickly and more comprehensively from a graphical representation. This also applies to transfer functions. Since these are complex functionals, one can either represent the magnitude and the argument of the complex number in separate plots, which results essentially in the Bode plot, or one uses a polar representation, which is the Nyquist plot. These are the two most commonly used representations. The Bode plot uses a log-log representation for the amplitude and a semi-log representation for the phase. The choice of scales supports the factorisation of transfer functions, as discussed in the section Transfer Functions Are Complex.

In the case of a single-input-single-output (SISO) system with the transfer function g(s), the amplitude ratio and the phase are given by:
Table 4.1: Computation of the approximate amplitude plot for different elementary transfer functions.
[Table body lost in extraction; only the first row survived: no root : K > 0 : 0° : −]

Table 4.2: Computation of the approximate phase plot for different elementary transfer functions.
[Table body lost in extraction.]
The rules applicable to complex numbers, Equations 4.65 and 4.66, describe how composite transfer functions are constructed from the individual factors. It is thus sufficient to provide the approximations for the basic components, which are constructed from asymptotes. The construction of the asymptotes for the basic transfer functions is shown in the two tables, 4.1 for the magnitude plot and 4.2 for the phase plots. Example plots are given below, which are also referenced in Table 4.1. Notice that the gain K is the steady-state gain if the transfer function has no zero poles. For the phase of a constant transfer function, the convention is used as indicated before: it is constant, either at 0 or −180°, depending on the sign of the steady-state gain. The phase for zero-root transfer functions is also a constant (or a step that occurs at negative infinity).

For the other primitive transfer functions, the simplest approximation of the phase is a step. The step occurs at the corner frequency, which is the intersection of the low-frequency and the high-frequency asymptotes in the amplitude plot. The sign of the root determines the direction of the step.
• Make polynomials monic: factor such that the leading coefficient is 1, with the leading coefficient being the one of the highest-order term.
• Find the roots of the two polynomials and write the polynomials in product form.
• Factorise into primitive polynomials: constant, zero roots, real roots, pairs of complex and conjugate complex roots.
The amplitude of the transfer function |k| shows as a horizontal line in the Bode plots (Figure 4.4), here with k := 2.

For the phase, the sign of the factor, also called the gain or the steady-state gain, determines the value of the constant phase. For a realisable system, the phase is 0 for k > 0 and −180° for k < 0, by convention.

One Zero Root: The two cases s and 1/s are important. Both appear frequently as components in polynomial transfer functions. The first is a differential operation, an unbounded operation, and the second a pure integrator. Pure differential operators are not realisable, whilst integrators are common elements one finds in plants. For example, a tank with inlets and outlets is an integrator in terms of mass. The Bode plot of the integrator is shown in Figure 4.5.
[Figures 4.4 and 4.5: Bode plots (amplitude ratio log₁₀|g| and phase in units of π versus frequency log₁₀(ω) [rad/s]) for the constant-gain and the zero-root transfer functions.]
One Real Root Only: The simplest version of a first-order system has one root only. For example, a SISO first-order system of this type is:

ẋ = −1/τ (x + u)        (4.89)

which has the amplitude and the phase

|g(s)| = 1 / √(τ² ω² + 1)        (4.90)
ϕ = tan⁻¹(−τ ω)        (4.91)

Figure 4.6 shows the case where the root is negative, here −1, and Figure 4.7 has a root of 1. Note that the amplitude is not affected by the sign change, whilst the phase changes sign.
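Equations 4.90 and 4.91 are easily checked against a direct evaluation of the frequency response; a minimal sketch, assuming the unit-gain first-order lag g(s) = 1/(τ s + 1) and an arbitrary time constant.

```python
import numpy as np

tau = 2.0                                 # an arbitrary time constant
omega = np.logspace(-2, 2, 201)           # frequency grid [rad/s]

# unit-gain first-order lag g(s) = 1/(tau s + 1), evaluated at s = i omega
g = 1.0 / (tau * 1j * omega + 1.0)

amp = np.abs(g)                           # should match Equation 4.90
phi = np.angle(g)                         # should match Equation 4.91
```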
Complex conjugate roots: Second-order systems are the simplest that may exhibit oscillatory behaviour. Oscillations are characterised by complex roots of the denominator polynomial. Complex roots always appear in pairs, the conjugate complex pairs. Since the oscillating behaviour is of so much interest, it is common practice to parameterise second-order systems in a special way, which is:

g(s) := 1 / ( 1 + (2ξ/ωₙ) s + (1/ωₙ²) s² )        (4.92)

ξ is the damping factor and ωₙ is the critical frequency. For a damping factor in the range 0 < ξ < 1 the roots are complex. Outside this interval, the roots are real. The latter case can be reduced to the product of two first-order systems with real roots, a case which was discussed above. Figure 4.8 shows the Bode plots for a frequency normalised with the critical frequency.
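A quick numerical check of the parameterisation in Equation 4.92: at the critical frequency, s = i ωₙ, the first and last denominator terms cancel, so the magnitude is exactly 1/(2ξ). The numbers in the sketch are arbitrary.

```python
def g_second_order(s, xi, wn):
    """Standard second-order parameterisation of Equation 4.92:
    g(s) = 1 / (1 + (2 xi / wn) s + s^2 / wn^2)."""
    return 1.0 / (1.0 + 2.0 * xi / wn * s + (s / wn) ** 2)

xi, wn = 0.2, 3.0          # arbitrary damping factor and critical frequency
# at the critical frequency s = i wn the magnitude is exactly 1/(2 xi)
peak = abs(g_second_order(1j * wn, xi, wn))
```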
Pure delay - a dead time of 1: The transfer function of the dead-time element was introduced in Equation 4.79. The Bode plot is shown in Figure 4.9.

[Figure 4.9: Bode plot of a pure delay of 1: the amplitude ratio is constant, whilst the phase decreases without bound as the frequency increases.]

Approximate Bode Plot of a Composite System:

g(s) := ( 1 · 10 · (1/10 s + 1)(s + 1) ) / ( 50 · 0.2 · 0.1 · s (1/0.2 s + 1)(1/0.1 s + 1) )        (4.96)
     := 10 (1/10 s + 1)(s + 1) / ( s (1/0.2 s + 1)(1/0.1 s + 1) )        (4.97)
Rewriting this expression again lets the components stand out clearly. Adding the following rules as additional ingredients, the sketching of the approximate Bode plots is done in a jiffy (see Figure 4.10).

[Figure 4.10: approximate Bode plot (amplitude and phase asymptotes versus frequency log₁₀(ω) [rad/s]) of the composite system.]
4.3.2.3.3 Decibels

Decibels are often used as units in the amplitude plots. The amplitude a in decibels in terms of the amplitude A is defined by

a := 20 log₁₀ A .

Systems with all poles and zeros in the left half plane are minimal-phase systems, whereas systems with poles or zeros in the right half plane are non-minimal-phase systems.
Chapter 5
Stability
has been abandoned for decades. Since the fifties, however, more and more applications appeared taking advantage of the very general nature of the theory, which makes it also applicable to nonlinear systems, while most other techniques are only applicable to linear systems.
The concept of stability can be nicely illustrated by a well-known physical system: a ball in the gravity field. Three different equilibrium positions can be identified for this system (Figure 5.1). A slight misplacement of the ball will result in

1. oscillation around the equilibrium state,
2. no change in the equilibrium state,
3. divergence: the ball drops down, away from its current state of equilibrium.

Thus the first two states are stable, whereas the last one is not.

Extending the model further by including friction, the ball in the pot displaced from its equilibrium position returns to the equilibrium position either by a damped oscillation or no oscillation at all. Such a system is called asymptotically stable.

Stability is in general a local property which therefore must be investigated over the whole range of application. In terms of our physical system, a mountain with a dip at the top is an example of a locally stable system. The class of linear systems is an important exception: for them, local stability always implies global stability too, as shall be shown later.

[Figure: trajectories 1-5 around the equilibrium point x_e, with δᵢ the limit for the initial perturbation and ǫ the desired operating domain: 1 : globally asymptotically stable; 2 : globally stable; 3 : stable; 4 : asymptotically stable; 5 : unstable.]
This solution of the unforced system approaches the equilibrium state, which is 0, only if the real parts of all eigenvalues are less than zero. Thus the stability requirement is

ℜ(λᵢ) < 0 ; ∀ i .

The proof is trivial: e^{λᵢ t} for any t > 0 and ℜ(λᵢ) > 0 increases without limit as t increases. As V spans a space different from the null space, the solution tends to infinity.

For time-invariant, linear systems, stability automatically implies global stability, as the above stable system will converge to the equilibrium point 0 for any initial condition x(0). This statement is proven again below using the direct method of Liapunov.
ẋ = f ( x) with f ( 0) = 0 (5.6)
Thus

v̇( x̃) = − 2 Σ_{i=1}^{n} ℜ(λᵢ) [ ℜ(x̃ᵢ)² + ℑ(x̃ᵢ)² ]        (5.17)

Thus for any x̃ᵢ within the neighbourhood Ω of the equilibrium state x̃ₑ = 0, according to Theorem 1:

4a) v̇( x) ≤ 0 ⇒ origin is stable
4b) v̇( x) < 0 ; x ≠ 0 ⇒ origin is asymptotically stable; if in addition v( x̃) > 0, which is the case if ℜ(λᵢ) < 0 ; ∀ i, the system is even globally asymptotically stable because

v( x) → ∞ as || x|| → ∞        (5.18)

Thus for || x̃|| → ∞, ||v( x̃)|| → ∞. According to Theorem 2, the origin x̃ₑ is globally asymptotically stable.

On the other hand, if only one eigenvalue assumes a positive real part, v( x̃) is indefinite and, since v̇( x̃) remains negative semidefinite, the system will be unstable.
Choosing α sufficiently large, the whole domain is included and any initial condition satisfies the condition ⇒ the system is globally asymptotically stable.
ẋ = A x + b u ; x(0) = x₀        (5.24)
y = cᵀ x        (5.25)
The first term depends on the initial conditions only and was subject to the stability discussion in the previous section.

Since the transfer function is given by

g(s) := cᵀ (s I − A)⁻¹ b ,

the poles are identical with the eigenvalues of A. Therefore a system is BIBO stable if it is asymptotically stable.

Note: the opposite is not necessarily true. BIBO stability does not imply asymptotic stability, because BIBO stability accounts only for the observable and controllable parts of the system.

Note: the Routh-Hurwitz method, which is based on a continued-fraction expansion, can be used for testing the location of the poles, which are the roots of the denominator polynomial in the transfer function.
ẋ = A x        (5.28)
v( x) = xᵀ P x        (5.29)
Based on this derivation, a three-step algorithm for testing a linear, time-invariant system can be derived:

1. Choose an arbitrary symmetric matrix Q > 0, for example Q := I.
2. Calculate the elements of the likewise symmetric matrix P element by element from the equation Q := −(Aᵀ P + P A) > 0.
3. Test for positive definiteness of P → Sylvester's theorem.

Theorem 5 (Sylvester). A symmetric matrix is positive definite if all its leading major minors are > 0.
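The three steps above can be sketched numerically; a minimal sketch, assuming SciPy's Lyapunov-equation solver and testing definiteness via the eigenvalues of P (equivalent to Sylvester's minor test for a symmetric matrix).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_stable(A, tol=1e-9):
    """Three-step test: choose Q := I, solve A^T P + P A = -Q for the
    symmetric P, then test P for positive definiteness.
    solve_continuous_lyapunov(M, R) solves M X + X M^T = R, so M := A^T."""
    n = A.shape[0]
    P = solve_continuous_lyapunov(A.T, -np.eye(n))
    # positive definiteness: all eigenvalues of the symmetrised P positive
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > tol))
```

For an unstable A, the Lyapunov equation may still have a unique solution, but P then fails the positive-definiteness test.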
Routh's scheme can be utilised for testing stability without explicitly calculating the roots of the denominator polynomial.
The procedure goes as follows:

1. Write the denominator polynomial in s in the form aₙ sⁿ + aₙ₋₁ sⁿ⁻¹ + · · · + a₁ s + a₀ = 0, where aᵢ ∈ R for all i and aₙ > 0.
2. If for any of the coefficients aᵢ in the polynomial the condition aᵢ ≤ 0 holds, then at least one root is inside the right half plane; thus the system is not stable.
3. Theorem: A polynomial has all its roots in the left half plane iff all αᵢ > 0 in the recursive scheme shown below for the example of a polynomial of 7th order:

α₁ := aₙ / aₙ₋₁     | aₙ     aₙ₋₂   aₙ₋₄   aₙ₋₆
α₂ := aₙ₋₁ / bₙ₋₂   | aₙ₋₁   aₙ₋₃   aₙ₋₅   aₙ₋₇
α₃ := bₙ₋₂ / cₙ₋₃   | bₙ₋₂   bₙ₋₄   bₙ₋₆
  ⋮                 | cₙ₋₃   cₙ₋₅   cₙ₋₇
  ⋮                 |   ⋮
α₇ := g₁ / h₀       | g₁     0

where

bₙ₋₂ := (aₙ₋₁ aₙ₋₂ − aₙ aₙ₋₃) / aₙ₋₁        (5.36)
bₙ₋₄ := (aₙ₋₁ aₙ₋₄ − aₙ aₙ₋₅) / aₙ₋₁        (5.37)
  ⋮        (5.38)
cₙ₋₃ := (bₙ₋₂ aₙ₋₃ − aₙ₋₁ bₙ₋₄) / bₙ₋₂        (5.39)
Theorem 6 (Routh). The number of roots of P(λ) in the RHP is equal to the number of sign changes in the second column of Routh's scheme (aₙ, aₙ₋₁, bₙ₋₂, cₙ₋₃, . . . ).
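Theorem 6 can be sketched as a short routine that builds the scheme from Equations 5.36-5.39 and counts sign changes; a minimal sketch that does not handle the degenerate zero-pivot cases discussed in the notes below.

```python
def routh_unstable_roots(coeffs):
    """Count sign changes in the (a_n, a_{n-1}, b_{n-2}, ...) column of
    Routh's scheme (Theorem 6): the number of roots in the RHP.
    `coeffs` are [a_n, ..., a_1, a_0] with a_n > 0; zero pivots are
    not handled in this sketch."""
    n = len(coeffs) - 1
    rows = [coeffs[0::2], coeffs[1::2]]        # the two starting rows
    while len(rows) <= n and len(rows[-1]) > 0:
        top, bot = rows[-2], rows[-1]
        new = []
        for j in range(len(bot)):
            t = top[j + 1] if j + 1 < len(top) else 0.0
            b = bot[j + 1] if j + 1 < len(bot) else 0.0
            # Equation 5.36 pattern: (bot[0]*top[j+1] - top[0]*bot[j+1]) / bot[0]
            new.append((bot[0] * t - top[0] * b) / bot[0])
        rows.append(new)
    col = [r[0] for r in rows if r]
    return sum(1 for x, y in zip(col, col[1:]) if x * y < 0)
```

For example, s² + 3s + 2 (roots −1, −2) yields no sign change, whilst s² − 3s + 2 (roots 1, 2) yields two.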
Note: Only the change of sign in this column is important; the calculation of the first column, the αᵢ, can therefore be omitted.

Note: The Hurwitz criterion is closely related to Routh's criterion, because the elements in the so-called Hurwitz matrix are calculated by the same rules as the elements in the Routh scheme.

Note: If one of the coefficients (aₙ, aₙ₋₁, . . . ) in Routh's scheme becomes zero, this element is replaced by an arbitrarily small number ε. All consequent coefficients are then a function of ε. After the whole scheme has been calculated, the limits of all coefficients in the second column that are a function of ε are examined for a change of sign by letting ε → 0⁺ and ε → 0⁻. Note though, that such a zero also indicates that the system has an eigenvalue on the imaginary axis, that is, on the stability boundary.
Let f(s) be a meromorphic² function in a region R, that is f(s) := B(s)/A(s), with z being the number of zeros and p the number of poles enclosed by a contour C encircling all poles and zeros in the complex plane; then

z − p := (1 / 2πi) ∮_C f'(σ)/f(σ) dσ        (5.40)

This assumes that the contour is oriented counter-clockwise and simple, that is, without self-intersections.³
More generally, suppose ω is a curve, oriented counter-clockwise; then

(1 / 2πi) ∮_ω f'(σ)/f(σ) dσ := Σ_z n(ω, z) − Σ_p n(ω, p)        (5.41)

where the function n(ω, k) is the winding number of ω around the point k.

The consequence is that the winding number n about the origin for a closed contour ω centred on the origin is

n := z − p        (5.42)
The stability criterion is derived from Cauchy's principle of the argument by drawing up a contour for the complete right-half plane (Figure 5.4). In doing so, the overall transfer function is split into two parts, namely the stable one and the unstable one. This is possible because polynomial transfer functions can readily be factorised accordingly. A little problem arises from the fact that points at the stability limit are quite common, as these are the poles at zero, the property that goes with pure integrators, a common element in process models. This problem can though be handled quite easily: since the contour must be analytical at every point, poles on the imaginary axis are avoided by infinitely small semi-circles.
² Meromorphic is a concept defined in the framework of complex analysis. In simple words, a meromorphic function is the ratio of two well-behaved functions. This ratio is well-behaved itself except at special points, where the denominator approaches zero and the ratio has poles. Thus polynomial transfer functions are exquisite examples of such functions.
³ http://en.wikipedia.org/wiki/Argument_principle
5.4. STABILITY OF LINEAR, CONTINUOUS SYSTEMS 103
[Figure 5.4: the contour in the s-plane: the imaginary axis ℑ(s), with small semi-circular indentations of radius r → 0 around poles on the axis, closed by a semi-circle of radius R → ∞ in the right-half plane.]
Applying Cauchy's principle of the argument gives the desired result: it states that the number of unstable poles of the closed-loop system is equal to the number of unstable poles of the open-loop system plus the number of encirclements of the origin by the Nyquist plot of the complex function 1 + P C, with P, C being the transfer functions of the plant and the controller, respectively. Most commonly this is modified by arguing not about the function 1 + P C, but about P C, thus the open-loop transfer function, but now with encirclements of the point −1 instead of the origin. The zeros of 1 + P C are the poles of the closed-loop system, whilst its poles are the poles of the open-loop system:

S := P C / (1 + P C) := ( B_P B_C / (A_P A_C) ) / ( 1 + B_P B_C / (A_P A_C) ) := B_P B_C / ( A_P A_C + B_P B_C )        (5.43)
Very often physical plants are stable and thus have no poles in the right-half plane. As a consequence, the above statement simplifies to the requirement that the −1 point must not be encircled.

Consider the equation 1 + P(s) C(s) = 0, which implies that P(s) C(s) = −1. Thus, where s assumes the value of a root, the phase angle is −180°.

The distance of the crossover of the open-loop transfer function P(s) C(s) from the −1 point gives a measure of how far the controlled system is away from the stability limit. Obviously, in the case of stable plants this crossover occurs between the −1 point and the origin. The distance on the real axis is called the gain margin. Equally, one can measure the angle, being the argument of the open-loop transfer function at the point where the magnitude of the transfer function is 1. This is called the phase margin.
[Figure: Nyquist plot of the open-loop transfer function with the critical point −1 on the real axis ℜ(s).]
Usually the two measures are shown in the Bode plots. For example, given the plant

P := 1 / ( (0.5 s + 1)(0.8 s + 1)(0.1 s + 1) )        (5.44)
C ∈ {5, 10, 50}        (5.45)

the Bode plots show the three transfer functions, one for each P-controller, with the stability margins (Figure 5.5).
[Figure 5.5: Bode plots (magnitude in dB and phase in degrees versus frequency in rad/sec) of the open-loop transfer function for the proportional gains 5, 10 and 50, with the stability margins indicated.]
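The gain margins of Equations 5.44-5.45 can be computed numerically; a minimal sketch, assuming the controllers are pure gains (which leave the phase untouched) and locating the phase crossover by root finding.

```python
import numpy as np
from scipy.optimize import brentq

def Pmag(w):
    """Magnitude of the plant of Equation 5.44 at s = i w."""
    s = 1j * w
    return abs(1.0 / ((0.5 * s + 1) * (0.8 * s + 1) * (0.1 * s + 1)))

def phase(w):
    """Unwrapped phase of the plant: sum of the three lag contributions."""
    return -(np.arctan(0.5 * w) + np.arctan(0.8 * w) + np.arctan(0.1 * w))

# phase-crossover frequency: where the open-loop phase passes -180 degrees
# (the same frequency for all three P-controllers)
w180 = brentq(lambda w: phase(w) + np.pi, 1.0, 50.0)

# gain margin for each proportional gain: 1 / |P C| at the crossover
gm = {k: 1.0 / (k * Pmag(w180)) for k in (5, 10, 50)}
```

A gain margin above 1 indicates a stable closed loop; the sketch suggests the gains 5 and 10 are safe whilst 50 is not.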
There is really only one purpose for system identification, and that is to find an appropriate model for the modelled plant, limited to the range of operating conditions in which the identification experiments can be performed.

So why do these models have to be matched, and what about these operating conditions? Matching is necessary because the model is not a precise image of the plant, and it is not always such that all the information about the plant's behaviour is known in all details, though some of the details may need to be included in the model in order to meet the specification one has defined for the use of the model. Thus system identification is done to find a model describing the process on the level of detail required for the application of the model. The application can be anything from just trying to understand the behaviour of the system, to using it for design and operational tasks such as control.
What about the operating conditions? Plants, or any system for that matter, must be disturbed, excited, as the specialist calls it, in order for the process to reveal its behaviour. For example, in order to find out how heavy something is, one has to accelerate it or expose it to a gravitational field. The same holds for any other process: it must be moved about in order to test out its behaviour. For the purpose of identification one thus injects a well-controlled disturbance, an excitation signal, that moves the process, that is, changes its state. The model is then fed with the same excitation signal, and the behaviour of the plant and the simulated process is compared, on the basis of which the model is changed. The model is changed until its behaviour fits satisfactorily within the plant's range of operation, whereby "satisfactorily" is determined by introducing a measure for the difference between the plant and the model.
Process identification has been a subject of research for as long as models have been defined. The recent literature body includes the review paper of Åström and Eykhoff, the book by Eykhoff and the book on the subject by Ljung (Ljung (1987); Eykhoff (1974); Astroem and Eykhoff (1971)). The subject has also
108 CHAPTER 6. SYSTEM IDENTIFICATION
been of interest in the statistics community, in particular associated with parameter identification and signal processing.
¹ The two things may overlap in that a parameter appearing as a factor in an expression may eliminate the associated term from the model as this parameter assumes the value zero. The zero thus takes a somewhat special position when interpreting model structures. This fact is extensively used in network representations such as neural nets.
6.2. DEFINING SYSTEM IDENTIFICATION 109
[Figure: the identification loop: the excitation u is applied to the plant, giving the plant's response y, and to the model(Θ), giving the model's response ŷ; the identification adjusts the parameters Θ.]
6.2.1 Consequences

Having defined the task of system identification, it is also apparent that the identified models are a function of all the elements entering the procedure: the data, the set of models and the criterion.
The criterion provides the measure; thus the result is obviously dependent on what measurement stick is being used. The most common choice is the cumulative sum of squares, mostly because of its nice mathematical properties. Mostly it serves the purpose well, and most people would not even spend a thought on the issue; it is thus mostly an unconscious choice of convenience.
The model set has a rather obvious effect on the result, as the parameters are, strictly speaking, defined in the context of the model. Not having a model in the set straightforwardly means that it is not being considered: obvious indeed, but not in a hidden context.
The input, namely the excitation signal being used for the identification period, has a huge impact on the result. This is probably the most often ignored. One often uses test signals without being aware of what one is actually doing, whilst it is not difficult to get a quite detailed insight when looking at the frequency behaviour of the plant model. Figure 6.2 shows the frequency behaviour of two models for the same process. The one with the steeper asymptote and higher phase shift is the more complex one. Assuming that the more complex model indeed describes the plant better, one observes that the simpler model does very well up to about 1 Hz. Above this frequency the phase changes to the double quite quickly. If one thinks about identification, then one observes two major parameters, each
Figure 6.2: Bode plot (magnitude and phase versus frequency in Hz) of two models, a complicated and a simplified one (flow rates: cycle 1 :: 10, cycle 2 :: 2, inflow :: 1).
6.3 Models

With models being the main objective, they are put into the centre, whilst the methodologies associated with identification are put into second place, as they are extensively treated in the literature; for example Ljung (1987); Eykhoff (1974); Astroem and Eykhoff (1971).

Models are typically classified using attributes such as linear, nonlinear, stochastic.
with V(f) = E{ [fᵢ(Y) − θ][fᵢ(Y) − θ]ᵀ } and where the matrix M_θ, whose inverse appears in the bound, is called the Fisher information matrix, defined by

M_θ := E[ (∂ log p(Y|θ)/∂θ)ᵀ (∂ log p(Y|θ)/∂θ) ]        (6.7)

With Equation (6.8) and Equation (6.16), the covariance of ∂ log p(Y|θ)/∂θ and f(Y) is

E{ [ f(Y) − θ ; (∂ log p(Y|θ)/∂θ)ᵀ ] [ (f(Y) − θ)ᵀ , ∂ log p(Y|θ)/∂θ ] } = [ V(f)  I ; I  M_θ ]        (6.17)

yielding

V(f) − M_θ⁻¹ ≥ 0        (6.19)
Proof. Sufficiency: Assume the theorem holds; then Equation (6.17) becomes:

E{ [ f(Y) − θ ; A(θ)(f(Y) − θ) ] [ (f(Y) − θ)ᵀ , (A(θ)(f(Y) − θ))ᵀ ] } = [ V(f)  V(f) Aᵀ(θ) ; A(θ) V(f)  M_θ ]        (6.21)

which gives

and hence

Premultiplying with [ M_θ , −I ] and postmultiplying with [ M_θ , −I ]ᵀ gives:

E{ ( M_θ [f(Y) − θ] − (∂ log p(Y|θ)/∂θ)ᵀ ) ( M_θ [f(Y) − θ] − (∂ log p(Y|θ)/∂θ)ᵀ )ᵀ } = 0        (6.27)

Consequently

M_θ [f(Y) − θ] = (∂ log p(Y|θ)/∂θ)ᵀ        (6.28)

which proves the theorem.

Corollary (8.1). The proof also reveals that if the theorem applies, then A(θ) = M_θ, the Fisher information matrix.
Let the instance of the multiple-input, single-output, l-i-p model Equation (6.1) be:

in order to get:

ŷ := F Θ ∈ Rⁿ .        (6.32)

In order to define the cost function, we first define the error as the difference between the response of the plant and the response of the model to the excitation signal applied to both identically:

e(Θ) := y − F Θ ,

and the cost function being the Q-weighted sum of squares:

J(Θ) := eᵀ(Θ) Q e(Θ) .

Equation (6.40) is also called the normal equation. It is also called the orthogonality condition, stating that the error is orthogonal to the function of the input; thus no more information can be extracted from the input.

Re-arranging to solve for the parameter vector gives:

Θ̂ := ( Fᵀ Q F )⁻¹ Fᵀ Q y .        (6.42)
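Equation (6.42) is a one-liner numerically; a minimal sketch with a hypothetical regressor matrix F(u) and identity weighting, using noise-free data so the estimator must recover the parameters exactly.

```python
import numpy as np

# l-i-p model y := F Theta with a hypothetical regressor matrix F(u)
u = np.linspace(0.0, 1.0, 50)
F = np.column_stack([np.ones_like(u), u, u**2])
theta_true = np.array([1.0, -2.0, 0.5])
y = F @ theta_true                            # noise-free plant response

Q = np.eye(len(y))                            # identity weighting
# Equation 6.42: Theta_hat := (F^T Q F)^{-1} F^T Q y
theta_hat = np.linalg.solve(F.T @ Q @ F, F.T @ Q @ y)
```

Solving the normal equations with a linear solver avoids forming the explicit inverse.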
Measurement noise is one of the most common problems with measured data. Making a couple of assumptions, it is straightforward to estimate the effect of the measurement noise on the estimated parameters.

Let the additive measurement error be v; then, with S := (Fᵀ F)⁻¹ Fᵀ, the key assumptions yield:

var(Θ) := var( S y )        (6.45)
        := S var( y ) Sᵀ        (6.46)
        := S I σ² Sᵀ        (6.47)
        := S Sᵀ σ²        (6.48)
        := ( Fᵀ F )⁻¹ Fᵀ F ( Fᵀ F )⁻¹ σ²        (6.49)
        := ( Fᵀ F )⁻¹ σ² .        (6.50)
The result is a symmetric matrix called the variance-covariance matrix, the diagonal elements being the variances and the off-diagonal elements the respective covariances. The covariance implies that a change in the expectation (average) of one parameter will also change the correlated parameter in the direction and magnitude indicated by the respective covariance. As a normed measure one uses the correlation.
6.4.1.2.1 Correlation

The correlation matrix is the variance-covariance matrix normed by the variances:

R := [ cov(Θᵢ Θⱼ) / ( var(Θᵢ) var(Θⱼ) )^{1/2} ]_{∀i,∀j} ,        (6.51)
  := [ cov(Θᵢ Θⱼ) / ( σᵢ σⱼ ) ]_{∀i,∀j} ,        (6.52)
  := [ r_{i,j} ]_{∀i,∀j} .        (6.53)
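Equations 6.50-6.53 can be sketched together; a minimal sketch assuming a hypothetical regressor matrix and an assumed noise standard deviation.

```python
import numpy as np

u = np.linspace(0.0, 1.0, 50)
F = np.column_stack([np.ones_like(u), u])   # hypothetical regressor matrix
sigma = 0.1                                  # assumed noise standard deviation

# Equation 6.50: variance-covariance matrix of the estimated parameters
cov = np.linalg.inv(F.T @ F) * sigma**2

# Equations 6.51-6.53: correlation matrix, normed by the variances
s = np.sqrt(np.diag(cov))
R = cov / np.outer(s, s)
```

By construction R is symmetric with unit diagonal, and its off-diagonal entries lie between −1 and 1.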
One can use the estimated parameters to predict the behaviour of the plant for a particular instance. Let the instance be

yᵢ := fᵀ(uᵢ) Θ + v .        (6.54)

If one estimates the variance, the 2 is replaced by the respective value from the Student t-distribution with the appropriate degrees of freedom and the chosen confidence limit.
Having found the best parameters poses the question of how confident one can be in them. So how does the cost function change with the parameters? Let the cost function be the identity-weighted version as given in Equation (6.36); then its change with the parameters is:

J(Θ) := eᵀ(Θ) e(Θ) ,        (6.60)
     := ( y − F Θ )ᵀ ( y − F Θ ) ,
     := ( y − F Θ̂ − F (Θ − Θ̂) )ᵀ ( y − F Θ̂ − F (Θ − Θ̂) ) ,
     := ( y − F Θ̂ )ᵀ ( y − F Θ̂ )
        − ( y − F Θ̂ )ᵀ F (Θ − Θ̂) − (Θ − Θ̂)ᵀ Fᵀ ( y − F Θ̂ )
        + ( F (Θ − Θ̂) )ᵀ F (Θ − Θ̂) ,
     := J(Θ̂) + (Θ − Θ̂)ᵀ Fᵀ F (Θ − Θ̂) ,        (6.61)

where we used the fact of Equation (6.40) twice for the middle terms. Thus

J(Θ) − J(Θ̂) := (Θ − Θ̂)ᵀ Fᵀ F (Θ − Θ̂) .        (6.62)
This is an ellipsoid in the parameter space. The lengths of the axes are given by the eigenvalues of the matrix C := Fᵀ F, whilst the eigenvectors, which, due to the spectral theorem, are orthogonal, determine the directions.³

The α confidence limits of the parameters are given by the corresponding value of the F-distribution:

    J(Θ) − J(Θ̂) ≤ k s² F_{k,n−k}(α) .    (6.63)
³ Since C is symmetric, C = V Λ V⁻¹ = Cᵀ = (V Λ V⁻¹)ᵀ. Thus Vᵀ = V⁻¹ and the quadratic form xᵀ Fᵀ F x can be rewritten as xᵀ V Λ Vᵀ x := zᵀ Λ z with z := Vᵀ x.
6.4. POINT ESTIMATORS 119
Because of the experimental errors one will not get the same response from the plant when repeating an experiment using the same input. If the responses are within the limits of the expected output error, one has no reason to be suspicious about the model appropriately describing the process. If one tries to fit a more complex model to the same data, one will find no improvement. Naturally, if one performs more experiments, it may show that the model is indeed not the best one can find. The latter aspect is used to design experiments focusing on the weak parts of the model.
The means to check on the model is to analyse the variance for the various contributions. Again we start with the sum of squares of the error, the cost function Equation (6.36), which we expand:

    eᵀ e := (y − F Θ)ᵀ (y − F Θ)    (6.65)
         := yᵀ y − Θᵀ Fᵀ y − yᵀ F Θ + Θᵀ Fᵀ F Θ    (6.66)
         := yᵀ y − Θᵀ Fᵀ F Θ − Θᵀ Fᵀ F Θ + Θᵀ Fᵀ F Θ    (6.67)
         := yᵀ y − Θᵀ Fᵀ F Θ .    (6.68)

Isolating the total sum of squares over the outputs:

    yᵀ y := Θᵀ Fᵀ F Θ + eᵀ e .    (6.70)
The total sum of squares is thus the regression sum of squares plus the rest sum of squares. Each of these terms is connected to a number of degrees of freedom being used to compute the respective term. The sum of squares of the outputs uses n, the number of observations. The regression sum of squares is computed from the k normal equations, k being the number of parameters. Thus the difference n − k is the number of degrees of freedom left for the rest sum of squares.
It is customary to show this in a table:

                      SSQ           DOF
    total SSQ         yᵀ y          n
    regression SSQ    Θᵀ Fᵀ F Θ     k
    rest SSQ          eᵀ e          n − k

One can show that if eᵀe/(n − k) estimates the variance of the experimental error, then the model is describing the process appropriately. If we take the rest SSQ divided by the respective degrees of freedom as an estimate for the variance, thus

    s²e := eᵀ e / (n − k) ,    (6.71)
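The decomposition (6.70) and the estimate (6.71) can be checked numerically on a small synthetic data set; all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data set: n observations, k = 2 parameters.
n, k = 30, 2
u = np.linspace(0.0, 3.0, n)
F = np.column_stack([np.ones(n), u])
y = F @ np.array([0.5, 1.5]) + rng.normal(0.0, 0.2, n)

theta = np.linalg.solve(F.T @ F, F.T @ y)
e = y - F @ theta

total_ssq = y @ y                            # uses n degrees of freedom
regression_ssq = theta @ (F.T @ F) @ theta   # uses k degrees of freedom
rest_ssq = e @ e                             # n - k degrees of freedom left

# Decomposition (6.70) holds because F^T e = 0 for the least-squares fit.
assert np.isclose(total_ssq, regression_ssq + rest_ssq)
s2_e = rest_ssq / (n - k)                    # error-variance estimate (6.71)
print(s2_e)   # an estimate of the experimental-error variance (here 0.2**2)
```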
and knowing the actual variance of the experimental error to be σ², then the ratio of the estimated variance and the variance, scaled by n − k, is χ²-distributed; thus algebraically:

    ((n − k)/σ²) s² ∼ χ²_{n−k} .    (6.72)

One has good reason to declare the model as not fitting well, and thus to reconsider its structure, if:

    s²/σ² > χ²_{n−k}(α)/(n − k) ,    (6.73)

α being the confidence limit.
The variance of the experimental error is usually not known and must be estimated from the data. Assuming that we make ni experiments for the input ui and obtain a corresponding set of responses yi, repeating the experiments for i := 1, ..., q, then the estimate for the variance is computed by:

    s²e := Σ_{i:=1}^q (y − E[y])² / Σ_{i:=1}^q (ni − 1)    (6.74)
        := Σ_{i:=1}^q (y − E[y])² / (Σ_{i:=1}^q ni − q) .    (6.75)

The −1, thus the reduction of the degrees of freedom by one, is due to the mean being E[y], which is calculated from the same data. So for s²e the total number of degrees of freedom is:

    ne := Σ_{i:=1}^q ni − q .    (6.76)
If the model fits well, then the experimental error is also estimated by the rest sum of squares. The two variance estimates can be compared with each other, as one can show that their ratio is F-distributed with the respective two degrees of freedom. If the ratio gets too large the model does not fit well and one may consider the model to be a bad fit:

    (1/(n − k)) eᵀ e / s²e > F(n − k, ne) .    (6.77)
The above test assumes that the variance is estimated with one set of experiments and the parameters with another. It is, though, meaningful to use all experiments for the regression and split the variance accordingly. Defining the variances:

    s²ef := (eᵀ e − Σ_{i:=1}^q (y − E[y])²) / (n − k − ne) ,    (6.78)
    s²e := Σ_{i:=1}^q (y − E[y])² / ne ,    (6.79)

the lack-of-fit test is then:

    s²ef / s²e ≤ F_{n−k−ne, ne}    (6.80)

to accept the model.
Identification is an iterative process. One fits a model, checks if it fits well and, if not, modifies the model until one is satisfied. The lack-of-fit measure is thus used as the decision criterion for whether or not a modified model should be adopted: the lack-of-fit for the new model compared to the old model must be statistically significantly better. An appropriate F-test provides the information.
6.4.1.6 Bias

Under certain circumstances the estimator will not deliver the desired result, but an estimate that is contaminated with a bias. With E[Θ] being the estimated parameters and Θ the true parameter values, a biased estimator is defined by:

    Θ̂ := E[Θ] := Θ + b .    (6.82)

If b is not equal to zero, the estimator is called biased; otherwise the estimator is unbiased.

This is unfortunately a very common case, as one often does not know which variables affect the output of the plant. The linear model that one identifies thus may not include all those variables, and the effect is a bias in the estimate. To show the effect, let the plant be represented by:
    z := fᵀ(u) Θ + gᵀ(u) Θ .    (6.83)

The model to be fitted shall be identical to the first term of the plant; thus the second term is the omitted one, which we abbreviate as v:

    y := fᵀ(u) Θ .    (6.84)

Consequently one can write the plant output as:

    z := y + v .    (6.85)

Using the model as a basis and the analogous stacking of the individual experiment instances, the estimator Equation (6.42) is

    Θ̂ := (Fᵀ F)⁻¹ Fᵀ z    (6.86)
       := (Fᵀ F)⁻¹ Fᵀ (y + v)    (6.87)
       := Θ + (Fᵀ F)⁻¹ Fᵀ v .    (6.88)
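The bias mechanism of (6.83)–(6.88) can be illustrated on a hypothetical plant with an omitted quadratic term; the functions and numbers are illustrative, not from the text:

```python
import numpy as np

# Plant: z = theta1*u + theta2*u**2, but the fitted model omits the u**2 term.
u = np.linspace(0.0, 2.0, 50)
z = 1.0 * u + 0.5 * u**2             # no measurement noise: the bias alone shows

F = u.reshape(-1, 1)                 # model regressor: only the linear term
theta_hat = np.linalg.solve(F.T @ F, F.T @ z)

# Bias term (6.88): (F^T F)^{-1} F^T v with v the omitted contribution.
v = 0.5 * u**2
bias = np.linalg.solve(F.T @ F, F.T @ v)
print(theta_hat, 1.0 + bias)         # theta_hat = true parameter + bias
```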
Bias is also introduced into the parameter estimation if the input has a stochastic component. The mathematical treatment of this case is rather involved and closely linked to the derivation of the Kalman filter.

    Fᵀ y := Fᵀ F Θ .    (6.92)

The estimate will be unbiased if the term Fᵀ e has zero mean, which is not the case when the error is correlated. The instrumental variable method replaces the Fᵀ matrix by an instrumental variable matrix Wᵀ in the above manipulation. It is a matrix which is a function of the data with the properties

    E[Wᵀ F] :: not singular    (6.93)
    E[Wᵀ e] := 0 ,    (6.94)

which is unbiased.
For dynamic systems (Section 6.5), most commonly a filtered input is used as instrument, where the filter's discrete transfer function may be

    g(q) := D(q)/C(q) .    (6.97)

An attractive alternative is the modulating function filters introduced by Maletinsky (1978); Preisig (1984); Preisig and Rippin (1993a,b).
family of models. In terms of overall structure, one distinguishes between three structures computing the error in the three different ways: 1) equation error, 2) output error, 3) input error.

or

    y(k) := (B(q)/A(q)) u(k) + (1/A(q)) e(k)    (6.103)

with:

    A(q) := 1 + Σ_{i∈A} ai q⁻ⁱ ,    (6.104)
    B(q) := Σ_{i∈B} bi q⁻ⁱ ,    (6.105)
    A := {i := 1, ..., n} ,    (6.106)
    B := {i := 1, ..., m} .    (6.107)

The ARX acronym derives from the statistics literature labelling the different terms with:

This model is linear in the parameters and results in a standard linear regression problem. Let:⁶
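The regressor definition is not reproduced here, but as a hedged sketch an ARX model can be identified by ordinary least squares along the following lines; the model order, signals and noise level are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a first-order ARX process: y(k) = -a1*y(k-1) + b1*u(k-1) + e(k).
a1, b1 = -0.7, 0.5
N = 400
u = rng.choice([-1.0, 1.0], N)              # binary excitation signal
e = rng.normal(0.0, 0.01, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + b1 * u[k - 1] + e[k]

# Cast as linear regression: rows f(k) = [-y(k-1), u(k-1)], parameters [a1, b1].
F = np.column_stack([-y[:-1], u[:-1]])
theta = np.linalg.solve(F.T @ F, F.T @ y[1:])
print(theta)      # close to [a1, b1]
```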
In the ARX case we could cast the parameter estimation problem into a simple linear regression form. In order to find a similar form, we first have to construct an estimate of the output for the ARMAX process. For this derivation we compact the notation:

with

    G(q) := B(q)/A(q)    (6.116)
    H(q) := C(q)/A(q)    (6.117)
         := 1 + Σ_{i:=1}^∞ hi q⁻ⁱ .    (6.118)

The variance of the error is thus scaled such that the H(q) polynomial is monic, i.e. the leading coefficient is 1. We further define:
Thus:

    v(k) := e(k) + (Σ_{i:=1}^∞ hi q⁻ⁱ) e(k) ,    (6.120)
         := e(k) + (H(q) − 1) e(k) .    (6.121)

With the data being known at the time k − 1, the second term is actually known and the expectation of the error is zero, thus

    v̂(k|k − 1) := (H(q) − 1) e(k) ,    (6.124)
               := (H(q) − 1) H⁻¹(q) v(k) ,    (6.125)
               := (1 − H⁻¹(q)) v(k) .    (6.126)
The one-step predictor for this generic model, analogous to Equation (6.133), is:

    ŷ(k|Θ) := (1 − D(q) A(q)/C(q)) y(k) + (D(q) B(q)/(C(q) F(q))) u(k) .    (6.145)

The following table, also taken from Ljung (1987), shows the models and their names depending on which polynomials are used in the general model:

    B, F          OE    output error
    B, F, C, D    BJ    Box-Jenkins
This model can also be cast into the pseudo-linear regression form. Again defining the error:

    ǫ(k, Θ) := y(k) − ŷ(k|Θ) ,    (6.146)

one finds:

    ǫ(k, Θ) := (D(q)/C(q)) (A(q) y(k) − (B(q)/F(q)) u(k)) .    (6.147)

Introducing the variables:

    w(k, Θ) := (B(q)/F(q)) u(k) ,    (6.148)
    v(k, Θ) := A(q) y(k) − w(k, Θ) ,    (6.149)

this simplifies to:

    ǫ(k, Θ) := (D(q)/C(q)) v(k, Θ) .    (6.150)

With:

one has the model again in the pseudo-linear regression form of Equation (6.143).
The derivation of the filter can be done in many different ways, including the orthogonality principle, Bayes' theorem, sequential minimal sum of squares, gradient search methods for the sum of squares, and others. We shall not derive the filter, but refer the interested reader to the literature, for example Jazwinski (1970), which is still one of the books with the most thorough treatment of this subject.

The Kalman filter works in two steps:

Prediction: of the state and the estimate's covariance
[Block diagram: the process (state x propagated through Φ, input u through Γ, output y through C, disturbances w and v) in parallel with the filter copy (Φ, Γ, C) producing x̂ and ŷ; the output error e is fed back through the gain K.]
The Kalman filter is a state variable filter, meaning that the output of the filter provides an estimate of the state given a vector of observations. The main use of this filter is to reconstruct the state from a set of measurements. It is thus also called an observer. There are other observers known in the literature, in particular the Luenberger observer, which differs from the Kalman filter mainly by having a fixed gain. The gain of the Luenberger observer is designed by setting the dynamics of the error propagation, that is, by placing the poles of the set of linear differential equations that evolve from the derivation of the residual error.
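The two steps can be sketched for a hypothetical linear system; the matrices Φ, Γ, C and the noise covariances below are illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical system x(k+1) = Phi x(k) + Gamma u(k) + w(k), y(k) = C x(k) + v(k),
# following the names in the block diagram above.
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
Gamma = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Qw, Rv = 0.01 * np.eye(2), np.array([[0.1]])

x = np.zeros((2, 1)); x_hat = np.zeros((2, 1)); P = np.eye(2)
for k in range(200):
    u = np.array([[1.0]])
    # true plant with process and measurement noise
    x = Phi @ x + Gamma @ u + rng.multivariate_normal([0, 0], Qw).reshape(2, 1)
    y = C @ x + rng.normal(0.0, np.sqrt(Rv[0, 0]))
    # step 1: prediction of the state and the estimate's covariance
    x_hat = Phi @ x_hat + Gamma @ u
    P = Phi @ P @ Phi.T + Qw
    # step 2: correction with the Kalman gain K
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(2) - K @ C) @ P

print(x.ravel(), x_hat.ravel())   # the estimate tracks the true state
```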
Figure 6.4: The choice of frequencies is essential for the identification experiment (Bode amplitude plot, log |g| versus log ω).
This concept applies directly to the plant identification problem: if one wants to obtain information about the plant in a certain time scale, then one needs to excite it in that time scale, which directly translates into applying a certain frequency as an excitation input. This can be nicely demonstrated in the case shown in Figure 6.4. It shows a Bode amplitude plot. The black line represents the behaviour of the true plant. A first-order model is being fitted, which has two parameters, the gain and the time constant. One thus needs at least two independent experiments that provide the information necessary to find estimates for the two parameters. In the red case, the two sinusoids indicated by the two dotted red lines are being used, and in the green case it is the corresponding green dotted lines that indicate the input signals. The result is obviously different. In the red case the low-frequency behaviour is captured, whilst in the green case more of the high-frequency behaviour is reflected in the model.
The literature is rich in discussions and suggestions of what type of input signals should or can be used. In many cases people aim at identifying a plant as a kind of whole, meaning that they do not think in time scales and thus hierarchical models. If one finds a split in the time scale, it is almost always feasible to work with two models, one for the high-dynamic range and one for the low-dynamic range. For example, it is quite thinkable that the red model is used to describe the plant in Figure 6.4 in the low-frequency range whilst in the high-frequency range the green model is used.
If one indeed aims at identifying the whole plant, one must provide a model that is able to capture the behaviour, thus is rich enough. Having such a model, one then must excite the plant persistently, meaning with a signal that is rich enough in frequency contents. For more details on the definition of persistent excitation see for example Ljung (1987); Eykhoff (1974), which also include references to work on this subject.
Obviously one of the simple solutions for the latter problem is to use all frequencies, for example a random signal. Since this may not be trivial to apply, one often uses signals that come close, such as random binary signals. Adding a variation in the amplitude gives multi-level random signals.
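A minimal sketch of such a random binary signal; the hold time and the levels are illustrative choices, and drawing the blocks from a larger level set would give a multi-level random signal:

```python
import numpy as np

rng = np.random.default_rng(5)

def random_binary_signal(n_samples, hold=5, levels=(-1.0, 1.0)):
    """Piecewise-constant signal switching randomly every `hold` samples.

    The hold time keeps some energy at lower frequencies instead of
    spreading it all the way up to the sampling frequency.
    """
    n_blocks = -(-n_samples // hold)          # ceiling division
    blocks = rng.choice(levels, n_blocks)
    return np.repeat(blocks, hold)[:n_samples]

u = random_binary_signal(1000, hold=10)
print(set(np.unique(u)))
```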
From the practical point of view, one should also keep the signal-to-noise ratio in mind. If one applies for example a random binary signal, the energy, which is usually available only in a limited amount, is spread over all the frequencies equally, at least ideally. In any case it is spread, making the signal of the individual frequency less strong and thus more likely to be covered by noise components acting on the equipment, for example the measurement devices. It is thus often better to apply one frequency or a selected set of frequencies. In any case, though, one should keep in mind in what time scale the model will be used and thus to what detail the process should or must be described.
discussed in the earlier chapters is typically a mix of the two. The foundation is usually mechanistic, and the more one gets into the internal details, what is often referred to as the constitutive equations, one has less and less information about the basic nature, and black-box models selected on experience are being used: the conservation concepts of physics are considered mechanistic, and so are large parts of the macroscopic theory-based description of the hydraulics of a plant. As one gets into the details of transport and into the description of material properties and reactions, the understanding of the underlying processes becomes thinner and thinner, or more and more involved, so that one usually has to resort to essentially empirical models. Often some remainders of the underlying concepts are preserved, reflected in the functional form the empirical model takes.
Asking the following three questions leads stepwise to a process model:

Some of the characteristics are not available through deductive studies and must be identified using process identification techniques. The experiments are to be designed to provide as much information as possible. Design of experiments has its roots in the statistical literature, which refers to the inputs or stimuli signals as factors (Box et al., 1978). The most efficient way of arranging experiments is in blocks, meaning a set of experiments which modifies the input levels systematically. Potentially a step is applied to each input. One waits long enough to get sufficiently close to the steady-state value of the observation, which implies that one has to wait for at least 5 times the maximal time constant in the plant. The input levels of the inputs are changed such as to form an orthogonal plan. If the model is nonlinear in the inputs this statement is slightly modified in that it is the function f(u) that is the object of an orthogonal design.
Let f°(u°) be the centre point of the plan and let the entries ∆f(∆u) of the diagonal matrix be positive. A plan matrix S lists all combinations of +1 and −1. For example, for a 3-input system, one gets the S matrix:

         [ +1  +1  +1 ]
         [ −1  +1  +1 ]
         [ +1  −1  +1 ]
    S := [ −1  −1  +1 ] ,    (6.165)
         [ +1  +1  −1 ]
         [ −1  +1  −1 ]
         [ +1  −1  −1 ]
         [ −1  −1  −1 ]

where the rows are the experiments and the columns indicate the input variation. The arrangement chosen here is called the standard form due to Yates (Box et al., 1978).⁸

Let

    D := diag[∆f(∆u)] .    (6.166)
    F := F° + (D Sᵀ)ᵀ    (6.167)

Performing experiments at the centre, averaging them and subtracting them from the measurements obtained when executing the plan, one gets

    y − ȳ₀ := (D Sᵀ)ᵀ Θ ,    (6.169)
            := S D Θ .    (6.170)

The matrix Sᵀ S is diagonal, as the columns of S are orthogonal, which makes the regression analysis extremely simple.

⁸ Box, Hunter and Hunter (Box et al., 1978) explain the details of the algorithm in chapter 10.
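The plan matrix and its orthogonality can be generated programmatically; note that the row ordering produced by itertools below is a valid full factorial but not the Yates standard form of (6.165):

```python
import numpy as np
from itertools import product

# 2^3 full-factorial plan matrix S: rows are experiments, columns the
# high/low (+1/-1) settings of the three inputs.
S = np.array(list(product([+1, -1], repeat=3)))

# The columns are orthogonal, so S^T S is diagonal and the regression
# (6.169)-(6.170) decouples into independent one-parameter problems.
print(S.T @ S)    # 8 times the identity
```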
6.7.1.3.1 Randomising

Trends can also be reduced in the case when one deals with similar items / plants, which are to be analysed / modelled by exciting the corresponding inputs. For example, if one has two different pairs of shoes and wants to know their ability to absorb a dye and grease, the shoes are the items / plants and will be blocked. The experiments are then to apply grease / dye, respectively, in combinations. An experimental plan is then established for each block, which in turn is randomised as discussed in Section 6.7.1.3.1.
There exist several criteria of optimality. The traditional ones build on the invariants of the Fisher information matrix M, including:
Appendix: Mathematical Components

A matrix:

         [ a1,1  a1,2  ...  a1,m ]
    A := [ a2,1  a2,2  ...  a2,m ]    (7.3)
         [  ...   ...  ...   ... ]
         [ an,1  an,2  ...  an,m ]
      := [ai,j]_{i:=1,...,n; j:=1,...,m} ∈ R^{n×m}    (7.4)
Inner product:

    xᵀ y := Σ_{∀i} xi yi    (7.7)

Outer product:

    x yᵀ := [xi yj]_{∀i,∀j}    (7.8)

Matrix product:

    C := A B := [ Σ_{∀j} ai,j bj,k ]_{∀i,∀k}    (7.10)

Inverse:

    A⁻¹ := |A|⁻¹ adj(A)    (7.11)

Example of a 2 × 2 matrix:

    A := [ a1,1  a1,2 ]    (7.12)
         [ a2,1  a2,2 ]

    adj(A) := [  a2,2  −a1,2 ]    (7.14)
              [ −a2,1   a1,1 ]
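The adjugate-based inverse (7.11)–(7.14) can be checked numerically; the 2 × 2 matrix below is an arbitrary illustrative example:

```python
import numpy as np

# Inverse via the adjugate, A^{-1} := |A|^{-1} adj(A), for a 2x2 example.
A = np.array([[4.0, 7.0], [2.0, 6.0]])
adj_A = np.array([[A[1, 1], -A[0, 1]],
                  [-A[1, 0], A[0, 0]]])
det_A = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
A_inv = adj_A / det_A
print(A @ A_inv)    # identity matrix
```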
7.2. ANALYSIS 139
7.2 Analysis
    ∂/∂x ∫_{f(x)}^{g(x)} dx′ F(x, x′) := F(x, g(x)) ∂g(x)/∂x − F(x, f(x)) ∂f(x)/∂x
                                         + ∫_{f(x)}^{g(x)} dx′ ∂F(x, x′)/∂x .    (7.20)
    f(λ x1, ..., λ xk) := λⁿ f(x1, ..., xk) .    (7.21)

    n f(x1, ..., xk) := Σ_{i:=1}^k xi ∂f(x1, ..., xk)/∂xi .    (7.22)
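Euler's relation (7.22) can be verified numerically for a concrete homogeneous function; the function chosen here is an illustrative example of degree n = 3:

```python
# Check Euler's theorem (7.22) for f(x1, x2) := x1**2 * x2,
# which is homogeneous of degree n = 3.
def f(x1, x2):
    return x1**2 * x2

x1, x2 = 1.7, 0.4
n = 3
df_dx1 = 2 * x1 * x2      # analytic partial derivative w.r.t. x1
df_dx2 = x1**2            # analytic partial derivative w.r.t. x2
lhs = n * f(x1, x2)
rhs = x1 * df_dx1 + x2 * df_dx2
print(lhs, rhs)           # equal
```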
    ϕ1 := ϕ1(ϕa, ϕb) .    (7.26)

    ϕ2 := ϕ2(ξ, ϕb) ,    (7.27)
       := ϕ1 − ξ ϕa .    (7.28)

The last equation can be interpreted as a tangent plane to the original function with slopes in the different directions, collected in the Jacobian:

    ξ := ∂ϕ1/∂ϕaᵀ ,    (7.29)

    dϕ2 := (∂ϕ1/∂ϕb) dϕb − ϕaᵀ dξᵀ .    (7.32)
7.2.5 Examples

The Legendre transformations are basic to thermodynamics. For example, let

    ϕ1 := U(S, V, n) ,    (7.33)
    ϕ2 := H(T, V, n) ,    (7.34)

and

    ϕa := S ,    (7.35)
    ϕb := [V, nᵀ]ᵀ .    (7.36)

Thus

    ∂ϕ1/∂ϕb := ∂U(S, V, n)/∂[V, nᵀ] ,    (7.37)
            := [−p, ∂U(S, V, n)/∂nᵀ] ,    (7.38)

    ξ := ∂U(S, V, n)/∂S ,    (7.39)
      := T .    (7.40)
    grad f := (∂f/∂x, ∂f/∂y, ∂f/∂z) .    (7.41)

The operator curl maps a vector field into another vector field:

    vector field v  →(rot)  vector field rot v

    rot v := (∂v3/∂y − ∂v2/∂z, ∂v1/∂z − ∂v3/∂x, ∂v2/∂x − ∂v1/∂y) .    (7.43)
The Laplace operator △ maps a scalar field into another scalar field:

    scalar field f  →(△)  scalar field △f

    ∇ := (∂/∂x, ∂/∂y, ∂/∂z) .    (7.45)
7.3.2.6 Relations

7.3.3 Flow

The flow through a surface S, with normal vector n, located in a flow field v is:

    f̂ := ∬_S v · n dS .    (7.55)

Example:
[Figure: an example graph with vertices A–F and edges a–i.]

    V(G) := {A, B, C, D, E, F} ,
    E(G) := {a, b, c, d, e, f, g, h, i} .

    ν(G) := |V(G)| = 6 .
    ǫ(G) := |E(G)| = 9 .
[Figures: examples of a link, a loop, and a multiple edge, illustrating simple and non-simple graphs, plus a further example graph with numbered vertices 1–5.]
Simple graph : A graph is simple if it has no loops and no two of its links join the same pair of vertices.

Complete graph : A graph in which each pair of distinct vertices is joined by an edge is called a complete graph. Not considering isomorphism, there is only one complete graph with n vertices, which is denoted by Kn.

Empty graph : A graph is empty if it contains no edges.

Finite graph : A graph is finite if both its vertex set and its edge set are finite.

Trivial graph : A graph with only one vertex is called trivial, all others non-trivial.

Identical graphs : Two graphs G and H are called identical if V(G) = V(H), E(G) = E(H) and f(G) = f(H).

Bipartite graph : A bipartite graph is one whose vertex set can be partitioned into two subsets X and Y, so that each edge has one end in X and the other end in Y. The partition of the graph's vertices V(G) = (X(G), Y(G)) is called a bipartition of the graph G.
Complete bipartite graph : A simple bipartite graph with bipartition X and Y in which each vertex of X is joined to each vertex of Y. If m, n denote the cardinalities of the two sets X and Y, respectively, then the graph is denoted by Km,n. This concept can be extended to k-partitioned graphs.

Incidence matrix : Any graph can be represented in a ν × ǫ matrix, where ν := |V(G)| and ǫ := |E(G)|. The incidence matrix of G is the matrix M(G) := [mi,j] with mi,j being the number of times that a vertex vi and an edge ej are incident.
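A small sketch constructing the incidence matrix from a vertex and edge list; the graph data below are illustrative and not the graph of the figures:

```python
# Build the (undirected) incidence matrix M(G) = [m_ij] from a vertex list
# and an edge list; m_ij counts how often vertex v_i and edge e_j are
# incident, so a loop contributes 2 in its vertex row.
vertices = ["A", "B", "C", "D"]
edges = {"a": ("A", "B"), "b": ("B", "C"), "c": ("C", "A"), "d": ("D", "D")}

M = [[0] * len(edges) for _ in vertices]
for j, (u, v) in enumerate(edges.values()):
    M[vertices.index(u)][j] += 1
    M[vertices.index(v)][j] += 1

for row in M:
    print(row)
# every column sums to 2: each edge has exactly two vertex slots
```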
[Figure 7.4: A bipartite graph showing the two sets X and Y on the left and the right. The shown graph is also complete.]

The incidence matrix of the example graph, over the edge columns a, ..., h, with the per-vertex entries:

    M :=  A: −1 −1 ;  B: +1 −1 −1 ;  C: +1 +1 −1 ;  D: ±1 −1 ;  E: −1 +1 +1 ;  F: +1 +1

The adjacency matrix over the node set {A, ..., F}, with the rows being the source nodes and the columns the sink nodes:

    A :=  A: 1 1 ;  B: 1 1 ;  C: 1 ;  D: 1 1 ;  E: 1
Let G1 and G2 be two non-empty graphs and define the union as a new graph:

    G := (V(G1) ∪ V(G2), E(G1) ∪ E(G2))

Subgraph : G1 and G2 are subgraphs of G.

Supergraph : G is a supergraph to G1 and G2.

Disjoint : If G1 ∩ G2 := ∅, this being short for (V(G1) ∩ V(G2), E(G1) ∩ E(G2)) = (∅, ∅), then the two graphs are disjoint.

Induced subgraph : The subgraph G1 := G[V1] is called an induced subgraph of G if E1 is the subset of edges of E that have both ends in V1.

Edge-induced subgraph : The subgraph G1 := G[E1] is called an edge-induced subgraph of G if V1 is the subset of V containing the ends of the edges in E1.

Underlying simple graph : The spanning subgraph of G obtained by deleting all loops and reducing all multiple edges to single edges.

Spanning subgraph : A subgraph with identical vertex set. H is a spanning subgraph of G if V(H) = V(G).
Walk : A walk W in the graph G is a finite non-null sequence of alternating vertices and edges.

Origin, terminus : The walk starts with the vertex v0, called the origin, and ends with the vertex vk, called the terminus; thus W := v0 e1 v1 e2 ... ek vk.

Trail : W is called a trail if the edges are distinct.

Path : If in addition the vertices are distinct, the walk is called a path.

Closed walk : A walk is closed if it is of positive length and the origin and terminus are identical.

Cycle : A cycle is a closed walk which passes through distinct vertices.

Connected graph : Two subgraphs G[Va] and G[Vb] are connected if there exists at least one edge having one end in each of the sets Va and Vb. Otherwise the graph is called disconnected, and the two subgraphs G[Va] and G[Vb] are called components of G.
[Figure 7.5: A graph on the vertices A–E with edges a–g, illustrating a general walk and special walks: trail, path, closed walk and cycle.]

    walk:        A-b-C-c-B-d-D-d-B-a-A-b-C
    trail:       A-b-C-c-B-d-D-e-E-g-B
    path:        A-a-B-d-D-e-E-f-C
    closed walk: A-a-B-d-D-e-E-g-B-c-C-b-A
    cycle:       A-a-B-g-E-f-C-b-A
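The definitions can be exercised on the example walks of Figure 7.5; the sketch below classifies an alternating vertex/edge sequence as a walk, trail or path:

```python
# A walk is a trail if all its edges are distinct, and a path if all its
# vertices are distinct (which then implies distinct edges as well).
def classify(walk):
    vertices, edges = walk[0::2], walk[1::2]
    if len(set(vertices)) == len(vertices):
        return "path"
    if len(set(edges)) == len(edges):
        return "trail"
    return "walk"

walk = "A-b-C-c-B-d-D-d-B-a-A-b-C".split("-")
trail = "A-b-C-c-B-d-D-e-E-g-B".split("-")
path = "A-a-B-d-D-e-E-f-C".split("-")
print(classify(walk), classify(trail), classify(path))   # walk trail path
```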
[Figure 7.6: Connected and disconnected graphs; the two subgraphs of the disconnected graph are its components.]
subsystem and a fast second subsystem, both being intimately coupled together:

First we assume that the first equation dominates and set the small number ε := 0; thus a pseudo-steady-state assumption is made for the second equation. This yields what is called the outer solution:

thus

    zo := −A22⁻¹ A21 xo    (7.61)

Using the result in the first matrix equation yields step-wise the outer solution

    xo(t) = e^{S t} x0    (7.66)

The output for the outer solution (indicated by a subscript o) is then

The outer solution is thus a simple exponential, as was probably expected. This outer solution describes the system approximately in the large time scale, but what about the small time scale, particularly at the beginning of a change? This solution, called the inner solution, is constructed by scaling. In this case the scaling is done in the time scale. Let

    τ := t/ε    (7.70)
Then

    ε⁻¹ dxi/dτ = A11 xi + A12 zi    (7.71)
    dxi/dτ = ε (A11 xi + A12 zi)    (7.72)
    dxi/dτ ≈ 0  →  xi(τ) :≈ x0    (7.73)

    ε ε⁻¹ dzi/dτ = A21 xi + A22 zi    (7.74)

    zi(τ) = e^{A22 τ} z0 + ∫₀^τ e^{A22 (τ−t′)} A21 x0 dt′    (7.75)
          = e^{A22 τ} z0 − A22⁻¹ e^{A22 (τ−t′)} |_{t′:=0}^{τ} A21 x0    (7.76)
          = e^{A22 τ} z0 + A22⁻¹ (e^{A22 τ} − I) A21 x0    (7.77)

    yi(τ) := C1 x0 + C2 (e^{A22 τ} z0 + A22⁻¹ (e^{A22 τ} − I) A21 x0)    (7.78)
Having the outer and the inner solution available, a combined solution may be constructed by adding the two solutions together and subtracting the common parts of the two. For a scalar observation one could suggest:

    yc(t) := yo(t) + yi(t) − c(t)    (7.79)

where the last term represents the common part of the two solutions. In this case, this common part is extremely simple, as it is just a constant which can be found easily by analysing the end value:

    yc(t → large) = yo(t → large)    (7.80)
    ⇒ yi(t → large) = c(t)    (7.81)

Thus

    lim_{τ→∞} yi = C1 x0 + C2 A22⁻¹ (−I) A21 x0    (7.82)
                 = (C1 − C2 A22⁻¹ A21) x0    (7.83)
7.5.1.4 Example

The attached figures show the simulation results for a system:

    A11 := −5 ,  A12 := 1 ,  A21 := 1 ,  A22 := −1 ,
    C1 := 1 ,  C2 := 1 ,  x0 := 10 ,  z0 := 5 ,  ε := 0.01 .    (7.84)
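The example (7.84) can be reproduced numerically; the explicit-Euler reference solution below is an illustrative choice for the "exact" trajectory:

```python
import numpy as np

# Simulate the example (7.84) and compare the exact output with the
# combined inner/outer approximation y_c(t) = y_o(t) + y_i(t) - c.
A11, A12, A21, A22 = -5.0, 1.0, 1.0, -1.0
C1, C2, x0, z0, eps = 1.0, 1.0, 10.0, 5.0, 0.01

S = A11 - A12 * A21 / A22            # reduced (outer) system matrix
c = (C1 - C2 * A21 / A22) * x0       # common part: the inner end value (7.82)

def y_combined(t):
    y_outer = (C1 - C2 * A21 / A22) * x0 * np.exp(S * t)
    tau = t / eps
    y_inner = C1 * x0 + C2 * (np.exp(A22 * tau) * z0
                              + (np.exp(A22 * tau) - 1.0) / A22 * A21 * x0)
    return y_outer + y_inner - c

# "exact" solution by explicit Euler with a step far below eps
dt, T = 1e-5, 0.5
x, z = x0, z0
err_max = 0.0
for k in range(int(T / dt)):
    t = k * dt
    err_max = max(err_max, abs(C1 * x + C2 * z - y_combined(t)))
    x, z = (x + dt * (A11 * x + A12 * z),
            z + dt / eps * (A21 * x + A22 * z))
print(err_max)    # small: the combined solution is a good approximation
```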
Figure 7.7: Inner solution, outer solution, and combined approximate solution compared with the exact solution (output versus time).
    slow:  dxs/dt = fs(xs, xf) ,    (7.85)
    fast:  ǫ dxf/dt = ff(xs, xf) .    (7.86)

In the literature, the slow system is called the degenerate system and the fast one is called the adjoined system.

• The initial values xs⁰ are in the domain of influence of the stable singular point of the adjoined system, i.e. the system will evolve to the isolated solution z̄.
[Figure: the error between the exact solution and the combined approximate solution as a function of time.]
• The solutions of the system Equation (7.85), Equation (7.86) are unique.

For a more detailed exposition see Vasil'eva (1995) and Kokotović et al. (1999).

The index (differential index) k of the (non)linear, sufficiently smooth DAE is the smallest k such that
7.7 Optimisation

7.7.1 General Problem

    min_{x∈Rⁿ} F(x)

Infeasible problem: R := ∅
Optimal point: x∗
δ-Neighbourhood of x: N(x, δ)

The function F(x) is smooth and at least twice-continuously differentiable.

    min_{x∈R¹} f(x)

1. ∂f(x)/∂x |_{x∗} := fx(x∗) := 0

2. ∂²f(x)/∂x² |_{x∗} := fxx(x∗) ≥ 0

To prove the above conditions, expand the function f(x) in a Taylor series about the optimal point:

    f(x∗ + ǫ) := f(x∗) + ½ ǫ² fxx(x∗) ,    (7.92)

the linear term vanishing because of condition 1.
    P(A|B) = P(AB)/P(B)

where p(x) is the probability density function characterising the continuous random variable x. If the random variable is discrete, the integral is replaced by corresponding summations.

The theorem is often used with Aj denoting a statement about an unknown phenomenon, whilst B represents the known information about the process. P(Aj) is denoted as the prior probability, P(Aj|B) as the posterior probability and P(B|Aj) as the likelihood.
    x̄ := E[xi]    (7.96)
      := Σi xi p(xi) ,  x :: discrete    (7.97)

    x̄ := E[x]    (7.98)
      := ∫₋∞^{+∞} x p(x) dx ,  x :: continuous    (7.99)

The second central moment is the variance. The third central moment is called the skewness and the fourth is called the kurtosis.
    E[a x + b] := a E[x] + b    (7.112)
    E[x + y] := E[x] + E[y]    (7.113)
    E[x y] := E[x] E[y]  if x and y are uncorrelated    (7.114)

    var(x) := E[(x − E[x])²]    (7.115)
           := E[x²] − (E[x])²    (7.116)
           ≥ 0    (7.117)
           = 0  for x := const    (7.118)

    var(a x + b) := a² var(x)    (7.119)
    var(x + y) := var(x) + var(y)  if x and y are independent    (7.120)

    cov(x, y) := E[(x − E[x]) (y − E[y])]    (7.121)
              := E[x y − x E[y] − E[x] y + E[x] E[y]]    (7.122)
              := E[x y] − E[x] E[y]    (7.123)

    ρ(x, y) := cov(x, y) / √(var(x) var(y))    (7.124)
    cov(x, y)² ≤ var(x) var(y)    (7.125)
    |ρ(x, y)| = 1  if x and y lie on a straight line    (7.126)
    ρ(x, y) = 0  if x and y are independent    (7.127)

    cov(a x + b, c y + d) := a c cov(x, y)    (7.128)
    cov(x + y, z) := cov(x, z) + cov(y, z)    (7.129)
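Several of these rules can be checked by Monte-Carlo sampling; the sample sizes and distributions below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)

# Check (7.119) and the covariance identities (7.121)-(7.123) on samples.
x = rng.normal(1.0, 2.0, 200_000)
y = rng.normal(-1.0, 0.5, 200_000)       # drawn independently of x
a, b = 3.0, 4.0

var_lhs = np.var(a * x + b)
var_rhs = a**2 * np.var(x)
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
cov_alt = np.mean(x * y) - x.mean() * y.mean()
print(var_lhs, var_rhs)    # equal: var(ax+b) = a^2 var(x)
print(cov_xy, cov_alt)     # identical, and near zero (x, y independent)
```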
Also

    E[x] := E[E[x|y]]    (7.135)
    var(x) := E[var(x|y)] + var(E[x|y])    (7.136)
    p(x) := (1/(√(2π) σ)) e^{−(x−µ)²/(2σ²)}    (7.143)
    E[x] := µ    (7.144)
    var(x) := σ²    (7.145)

    p(x) := (1/µ) e^{−x/µ}  for x ≥ 0,  0 else    (7.146)
    E[x] := µ    (7.147)
    var(x) := µ²    (7.148)

Mostly used as the initial condition in recursive processes and in random number generation, after which the numbers are transformed.
    p(x) := 1/(b − a)  for a < x < b,  0 else    (7.149)
    E[x] := (a + b)/2    (7.150)
    var(x) := (b − a)²/12    (7.151)
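As a sketch of the remark above, uniform samples can be transformed into exponentially distributed ones via the inverse CDF; the mean value is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(7)

# Transform U(0,1) samples through the inverse CDF of the exponential
# distribution to obtain samples with mean mu and variance mu**2.
mu = 2.0
u = rng.uniform(0.0, 1.0, 100_000)
x = -mu * np.log(1.0 - u)          # inverse CDF of the exponential distribution

print(x.mean(), x.var())           # close to mu and mu**2
```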
7.8.3.3 F-Distribution
Things to Know
    A + B ⇒ C    (8.1)
    2B + C ⇒ D    (8.2)
    2A ⇒ F + G    (8.3)
    G + B ⇒ E    (8.4)

    A := [A  B  C  D  E  F  G]    (8.5)

         [ −1  −1  +1   0   0   0   0 ]
    N := [  0  −2  −1  +1   0   0   0 ]    (8.6)
         [ −2   0   0   0   0  +1  +1 ]
         [  0  −1   0   0  +1   0  −1 ]
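The matrix N can be assembled from the reaction list; the dictionary representation below is an illustrative choice:

```python
import numpy as np

# Assemble the stoichiometric matrix N of (8.6) from the reactions
# (8.1)-(8.4) over the species vector A := [A B C D E F G].
species = ["A", "B", "C", "D", "E", "F", "G"]
reactions = [
    {"A": -1, "B": -1, "C": +1},            # A + B   => C
    {"B": -2, "C": -1, "D": +1},            # 2B + C  => D
    {"A": -2, "F": +1, "G": +1},            # 2A      => F + G
    {"G": -1, "B": -1, "E": +1},            # G + B   => E
]
N = np.array([[r.get(s, 0) for s in species] for r in reactions])
print(N)
```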
Chapter 9
• A toilet

• A coffee machine

[Figure: the bulk environment E and the sensor S, exchanging the heat flow q̂_{E|S} and the volume work ŵ_{S|E}.]

The part of interest is the sensor; thus we model the sensor dynamics, having already assumed that it can be seen as an internally fast system:

    dE_S/dt := q̂_{E|S} − ŵ_{S|E} .    (9.1)
The heat flow model approximates the behaviour of the film by:

    q̂_{E|S} := −k_{E|S} A_{E|S} (T_S − T_E) ,    (9.2)
    k_{E|S} := given ,    (9.3)
    A_{E|S} := given ,    (9.4)

and the system volume work term, representing the change of the volume, by:

    ŵ_{S|E} := p_S dV_S/dt .    (9.5)

Assuming we know the heat capacity as a function of time, namely as the product of the known volume, the known, constant density and the specific heat capacity in the form of a polynomial with the known parameters {ai}:

    Cp(T) := V ρ cp(T) ,    (9.13)
    cp(T) := Σi ai Tⁱ ,    (9.14)
    ai := given ,    (9.15)
    V := given ,    (9.16)
    ρ := given ,    (9.17)
the model is completely specified and proper. The dynamics of the sensor are driven by the temperature of the environment, a function of the given conditions and parameters.

[Figure: the variable dependency graph linking H°, Ḣ, q̂, k, A, T and T_E; H(Cp(T)) is implicit in T, with V, ρ, cp(T) and the parameters {ai} feeding into T.]
9.1.2.2 Solution

The dynamics of the process have at least three different time scales:

• The fastest is the time in which the diffusion profile is being established.

• The second is the time scale in which the diffusion is fast compared to the change of the volume of the water in the glass. The justification for this assumption is that the volume of the evaporated water is about 3 orders of magnitude larger in terms of the volume.

• The third is the slowest one, in which the level of the water in the glass changes.
[Figure 9.3: A half-full glass of water in a room with uniform conditions; the levels z_E and z_W are marked.]
On the second level one assumes that the level of the water is constant, whilst the diffusion is fast. Dividing the diffusion equation by the diagonal diffusion matrix and letting the individual diffusion coefficients go to infinity:

    lim_{λ→∞} λ⁻¹ ∂c(z, t)/∂t := ∂²µ/∂z² ,    (9.19)

which eliminates the state of the diffusion system. The profile of the driving force along the length is thus a linear function:

    µ(z) := a z + b .    (9.20)

    µ_W := µ_D(z_W) ,    (9.21)
    µ°_W + R T ln 1 := µ°_D(z_W) + R T ln x_G(z_W) ,    (9.22)

where the fact that the water is the only species in the water lump has been considered. With p_B being the known barometric pressure and

    x*_G(z_W) := p*_W / p_B ,    (9.23)

    a := (x*_G − x_E) / (z_W − z_E) .    (9.24)
[Figure 9.5: On the second time scale the diffusion system has "lost" its capacity; it is reduced to a pure resistance D carrying the flow n̂_{W|E}.]
The rest is simple now. The mass balance for the water lump is drawn up:

    ṅ_W := −n̂_{W|E} ,    (9.28)
[Figure 9.6: On the third time scale the volume of the water body is changing, thus the level dropping.]
mass:

    V_W := n_W / ρ_W ,    (9.30)
        := A_W z_W .    (9.31)

This set of equations is to be solved for the secondary state variable in question, namely z_W:

    z_W := n_W / (A_W ρ_W) .    (9.32)

The rest of the variables are known: k, z_E, x_E, µ°_L, µ°_G, R, T. Thus the resulting set of equations is well defined; if one in addition specifies the initial conditions, the problem can be integrated, actually in this case analytically.
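A hedged numerical sketch of the slow-time-scale model (9.28)–(9.32); the form of the transfer law and all numerical values below are assumptions for illustration only, not the text's specification:

```python
# Integrate the water-level model: n_W' = -n_hat (9.28), z_W = n_W/(A_W rho_W)
# (9.32), with an assumed flux n_hat = k*A_W*(x_star - x_E)/(z_E - z_W).
k, A_W, rho_W = 1e-3, 1e-2, 55e3      # rho_W: molar density of water, mol/m3
z_E, x_star, x_E = 0.2, 0.03, 0.01
z_W0 = 0.1
n_W = A_W * z_W0 * rho_W              # initial inventory from (9.30)-(9.31)

dt, t_end = 10.0, 1e6
t = 0.0
while t < t_end:
    z_W = n_W / (A_W * rho_W)                        # (9.32)
    n_hat = k * A_W * (x_star - x_E) / (z_E - z_W)   # assumed transfer law
    n_W -= dt * n_hat                                # (9.28)
    t += dt
print(z_W)   # the level has dropped below its initial value
```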
[Figure: a mixing tank c fed from the tanks a and b (flows V̂_{a|c}, n̂_{a|c} and V̂_{b|c}, n̂_{b|c}) and discharging into the tank d (V̂_{c|d}, n̂_{c|d}).]
9.1.3.1 Solution

The model of a tank with several inputs and outputs is described as an ideally-stirred tank reactor. The energy balance for the system is not of interest, as no exchange of energy occurs. Thus it is only the component mass balances that are to be established. The component mass balances are a set of ordinary differential equations in the component mass, which for this task we shall transform into differential equations in the concentration and the volume. Assuming that there is no reaction taking place in any of the tanks, the component mass balances for an arbitrary system S are:

    dn_S/dt = Σ_{∀m} α_m n̂_m + ñ_S    (9.33)

with α_m ∈ {−1, +1} giving the reference direction, n̂_m the mass flow m and ñ_S the reaction-dependent transformation rate.
174 CHAPTER 9. EXAMPLES, EXERCISES, ANSWERS
9.1.3.1.2 Transfer
There is no transfer law given, but it is assumed that the volumetric flow is known; thus the transfer is given by:

\hat n_m := k_m\, \hat V_m\, c_m \,, \qquad (9.34)
k_m := \text{controlled, thus known} \,, \qquad (9.35)
\hat V_m := \text{known} \,. \qquad (9.36)

The k_m has been introduced merely to demonstrate where the controller would be connected. In this case, the volumetric flow rate would be the maximum available, and this variable would be adjusted by the controller between 0 and 1.
The concentration is the one from the tank the fluid is coming from. Mostly people assume that it may only possibly come from one tank at all times, that is, the flow direction never changes. This may or may not be a valid assumption. Here it seems reasonable. If this is not the case, then the concentration switches as the volumetric flow changes sign!
In this section, all variables except the fundamental state, which are the conserved quantities, are to be linked back to the fundamental state and known quantities such as the volumetric flow rate and the density. For the notation, we use S as a generic index for system, meaning that the equations really apply to any of them, namely the two feed tanks, the mixing tank and the product tank.
The transfer introduces the concentration. Concentration is defined by:

c_S := \frac{n_S}{V_S} \,, \qquad (9.38)
Introducing the volume, which is a function of the component mass, the basic state:

V_S := \frac{n_S}{\rho_S} \,, \qquad (9.39)
\rho_S := \text{const} \,. \qquad (9.40)

The density is the molar density, assumed constant and known. The total molar mass is obtained as the scalar product of a one-vector with the molar masses in the system:

n_S := e^T n_S \,, \qquad (9.41)
e := [1, 1, \ldots, 1]^T \,, \qquad (9.42)

which completes this section.
9.1.3.1.5 Manipulations
We could have used the fact that the density is constant earlier. But for the purpose of demonstrating how it enters the calculation, we kept it in so far; it is only now that the assumption is being used to find the simplified equations for the change in the volume:

\frac{d V_S}{dt} := \rho_S^{-1}\, e^T\, \frac{d n_S}{dt} \,. \qquad (9.48)

This leads to further simplifications:

\frac{d V_S}{dt} = \rho_S^{-1} \sum_{\forall m} \alpha_m\, k_m\, \hat V_m\, e^T c_m \qquad (9.49)
= \rho_S^{-1} \sum_{\forall m} \alpha_m\, k_m\, \hat V_m\, \rho_m = \sum_{\forall m} \alpha_m\, k_m\, \hat V_m \qquad (9.50)

\frac{d c_S}{dt} = V_S^{-1} \left( \sum_{\forall m} \alpha_m\, k_m\, \hat V_m\, c_m - c_S \sum_{\forall m} \alpha_m\, k_m\, \hat V_m \right) \qquad (9.51)
= V_S^{-1} \sum_{\forall m} \alpha_m\, k_m\, \hat V_m\, (c_m - c_S) \qquad (9.52)
\frac{d V_a}{dt} = -k_{a|c}\, \hat V_{a|c} \,, \qquad (9.53)
\frac{d c_a}{dt} = 0 \,, \qquad (9.54)
\frac{d V_b}{dt} = -k_{b|c}\, \hat V_{b|c} \,, \qquad (9.55)
\frac{d c_b}{dt} = 0 \,, \qquad (9.56)
\frac{d V_c}{dt} = k_{a|c}\, \hat V_{a|c} + k_{b|c}\, \hat V_{b|c} - k_{c|d}\, \hat V_{c|d} \,, \qquad (9.57)
\frac{d c_c}{dt} = V_c^{-1} \left( k_{a|c}\, \hat V_{a|c}\, (c_a - c_c) + k_{b|c}\, \hat V_{b|c}\, (c_b - c_c) \right) \,, \qquad (9.58)
\frac{d V_d}{dt} = k_{c|d}\, \hat V_{c|d} \,, \qquad (9.59)
\frac{d c_d}{dt} = V_d^{-1}\, k_{c|d}\, \hat V_{c|d}\, (c_c - c_d) \,. \qquad (9.60)
• state vector x_S := \begin{bmatrix} V_S \\ c_S \end{bmatrix} , \quad S \in \{a, b, c, d\}

• input vector u := \begin{bmatrix} k_{a|c} \\ k_{b|c} \\ k_{c|d} \end{bmatrix}

• parameters: there are no real parameters. The distinction between parameters and conditions is, though, not quite sharp. We use the rule that if it is a state that is known, then it is a condition.
It remains to use these definitions and to rewrite the equations in this new notation:

\begin{bmatrix} \dot x_a \\ \dot x_b \\ \dot x_c \\ \dot x_d \end{bmatrix} = \begin{bmatrix} -u_1\, \gamma_3 \\ 0 \\ -u_2\, \gamma_4 \\ 0 \\ u_1\, \gamma_3 + u_2\, \gamma_4 - u_3\, \gamma_5 \\ x_{c,1}^{-1} \left( u_1\, \gamma_3\, (x_{a,2} - x_{c,2}) + u_2\, \gamma_4\, (x_{b,2} - x_{c,2}) \right) \\ u_3\, \gamma_5 \\ x_{d,1}^{-1}\, u_3\, \gamma_5\, (x_{c,2} - x_{d,2}) \end{bmatrix} \qquad (9.61)

with the first component of each x_S being the volume and the second the concentration.
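The network of Equations (9.53)-(9.60) can be integrated with a simple forward-Euler scheme. All numerical values below (initial volumes and concentrations, valve settings k, maximum volumetric flows) are illustrative assumptions; the sketch also makes visible that the total volume is conserved, since the volume rates sum to zero.

```python
# valve positions in [0, 1] and maximum volumetric flows (assumed values)
k = {'a|c': 0.5, 'b|c': 0.5, 'c|d': 1.0}
Vhat = {'a|c': 1.0, 'b|c': 0.8, 'c|d': 0.9}

V = {'a': 10.0, 'b': 10.0, 'c': 5.0, 'd': 1.0}   # initial volumes (assumed)
c = {'a': 1.0, 'b': 0.2, 'c': 0.0, 'd': 0.0}     # initial concentrations (assumed)

dt = 0.01
for _ in range(500):
    fa = k['a|c'] * Vhat['a|c']   # volumetric flow a -> c
    fb = k['b|c'] * Vhat['b|c']   # volumetric flow b -> c
    fc = k['c|d'] * Vhat['c|d']   # volumetric flow c -> d
    dV = {'a': -fa, 'b': -fb, 'c': fa + fb - fc, 'd': fc}          # (9.53/55/57/59)
    dc = {'a': 0.0, 'b': 0.0,                                      # (9.54), (9.56)
          'c': (fa * (c['a'] - c['c']) + fb * (c['b'] - c['c'])) / V['c'],  # (9.58)
          'd': fc * (c['c'] - c['d']) / V['d']}                    # (9.60)
    for s in V:
        V[s] += dt * dV[s]
        c[s] += dt * dc[s]

print(sum(V.values()))   # total volume stays constant
```

The feed-tank concentrations stay at their initial values, exactly as Equations (9.54) and (9.56) state.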
The process structure is essentially identical to the generic mixing plant discussed above. Thus we refer to Figure 9.7. The obvious difference is that we now have a reaction going on in the contents of the reactor, which is:

H_2CO_3 \leftrightarrow H^+ + HCO_3^- \qquad (9.62)
HCO_3^- \leftrightarrow H^+ + CO_3^{2-} \qquad (9.63)
H_2O \leftrightarrow H^+ + OH^- \qquad (9.64)
NaOH \leftrightarrow Na^+ + OH^- \qquad (9.65)
NaHCO_3 \leftrightarrow Na^+ + HCO_3^- \qquad (9.66)
Na_2CO_3 \leftrightarrow 2\, Na^+ + CO_3^{2-} \qquad (9.67)
Assuming that we operate with diluted solutions, the energy household is not of interest and we only need to focus on the component mass balances to get a reasonable description of the process. Let system A be the feed tank A and system B the feed tank B, whilst the reactor contents we label with R and the product tank with P:

\dot n_R := \hat n_{A|R} + \hat n_{B|R} - \hat n_{R|P} + V_R\, N^T \tilde\eta_R \,. \qquad (9.68)

The transport equations are not needed, as one assumes the volumetric flows in each of the streams to be known. Thus the representation of the component flows reduces to a simple transformation:

\hat n_{a|b} := c_{a|b}\, \hat V_{a|b} \,, \qquad a|b \in \{A|R, B|R, R|P\} \,. \qquad (9.69)

Since the flows can reasonably be assumed unidirectional, the intensive property of the stream is the one of the source, thus c_{a|b} = c_a.
The reaction consists of dissociation reactions for the hydrogen carbonate in two stages and for the sodium hydroxide. The second dissociation is nearly complete, thus one could consider ignoring it in the set of equilibrium reactions.
Little is needed here. The main one is the link between the molar composition and the component mass:

c := V^{-1}\, n \,, \qquad (9.70)
V := \rho^{-1}\, n \,, \qquad (9.71)
n := e^T n \,, \qquad (9.72)
y := -\log(c_4) \,, \qquad (9.73)

where the concentration c_4 is the concentration of the 4th species, which is the proton H^+.
Assuming the density is constant as the solvent dominates, the above equations are complete.
To eliminate the unknown fast reaction rates from the balance equations, the left null-matrix of the transposed stoichiometric matrix must be computed, which is:

\Omega := \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & -1 & -2 & 1 & -1 & 0 & 0 & 1 & 0 & 0 \\ 2 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ -1 & 0 & 1 & -1 & 0 & -1 & 0 & 0 & 0 & 1 \end{bmatrix}

The rank of \Omega is 4. Thus of the 10 species, 4 come from the dynamic balances and the rest come from the six equilibrium relations that complete the model equations:

0 := k(c) \,. \qquad (9.74)
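The rank statement can be checked numerically with the matrix as reconstructed above:

```python
import numpy as np

# The left null-space matrix Omega; its rank determines how many species
# must come from dynamic balances rather than equilibrium relations.
Omega = np.array([
    [ 0,  0,  0,  0,  1,  1, 1, 0, 0, 0],
    [ 0, -1, -2,  1, -1,  0, 0, 1, 0, 0],
    [ 2,  1,  0,  1,  0,  1, 0, 0, 1, 0],
    [-1,  0,  1, -1,  0, -1, 0, 0, 0, 1],
])
print(np.linalg.matrix_rank(Omega))  # -> 4: four dynamic balances, six equilibria
```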
The example we use is a conductive heat transfer through a wall. Figure 9.8 shows the simplest possible arrangement, in which two capacities are coupled by a distributed heat transfer system, for example a solid wall separating two fluid bodies. The energy balance can be transformed into the well-known heat diffusion equation of Fourier.
Let T(x, t) be the temperature as a function of the spatial co-ordinate x and the time t; then the temperature profile is obtained by integrating the heat diffusion equation:

\frac{\partial T(x,t)}{\partial t} := \alpha\, \frac{\partial^2 T(x,t)}{\partial x^2} \,. \qquad (9.76)

Assuming a very fast transfer, whereby very fast is to be taken relative to the dynamics of the attached systems, one may assume that the adjustment of the temperature profile in the distributed transfer system to its equilibrium state occurs instantaneously. The left-hand side is then zero and the profile is readily computed as linear between the temperatures of the boundaries, which are the temperatures of the two guarding systems. The state of the simplified system is thus eliminated from the dynamics and may be reconstructed from the states of the two attached systems.
Figure 9.9: A slab of some heat-conductive material of width w and length l, being heated on one side and cooled on the other, losing heat through the top surface A_{yz} only. The sides and the bottom are thus ideally insulated.
[Figure: systems L, S, R with heat flows \hat q_{L|S} and \hat q_{S|R}.]

T(x) := \left( -\frac{\cosh(\sqrt{\xi}\, l)\, \sinh(\sqrt{\xi}\, x)}{\sinh(\sqrt{\xi}\, l)} + \cosh(\sqrt{\xi}\, x) \right) \Delta T_L + \frac{\sinh(\sqrt{\xi}\, x)}{\sinh(\sqrt{\xi}\, l)}\, \Delta T_R \,, \qquad (9.82)
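A quick numerical check of the profile (9.82): at the two boundaries it must return the respective temperature differences. The values of xi, l and the temperature differences are arbitrary test values.

```python
import math

xi, l = 2.0, 0.5          # arbitrary test values
dTL, dTR = 30.0, -10.0    # boundary temperature differences (arbitrary)

def T(x):
    """Quasi-stationary temperature profile, Equation (9.82)."""
    s = math.sqrt(xi)
    return ((-math.cosh(s * l) * math.sinh(s * x) / math.sinh(s * l)
             + math.cosh(s * x)) * dTL
            + math.sinh(s * x) / math.sinh(s * l) * dTR)

print(T(0.0), T(l))   # -> 30.0 at x = 0 and -10.0 at x = l
```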
In the next step, the heat loss is computed by integrating the heat loss equation over the surface, which is of width a_y; that is, one assumes the other side is insulated:

\hat q_{S|E} := \int_0^l \hat q_{S|E}(x)\, dx \,, \qquad (9.83)
:= \int_0^l (-w\, a_y)\, (T_E - T(x))\, dx \,, \qquad (9.84)
:= \frac{\left(-1 + e^{\sqrt{\xi}\, l}\right) a_y\, w}{\sqrt{\xi}\, \left( e^{\sqrt{\xi}\, l} + 1 \right)}\, \Delta T_L + \frac{\left(-1 + e^{\sqrt{\xi}\, l}\right) a_y\, w}{\sqrt{\xi}\, \left( e^{\sqrt{\xi}\, l} + 1 \right)}\, \Delta T_R \,, \qquad (9.85)
\Delta T_L := T_L - T_E \,, \qquad (9.86)
\Delta T_R := T_R - T_E \,, \qquad (9.87)
Figure 9.11: Assuming a zero-capacity effect of the slab yields a kind of heat-splitter (heat flows \hat q_{L|S}, \hat q_{S|R} and \hat q_{S|E}).
The three heat streams, namely \hat q_{S|E}, \hat q_{L|S} and \hat q_{S|R}, sum up to zero, as can be shown with a little tedious calculation. All three streams can be represented as functions of the two temperature differences, thus also depend on all three temperatures. This result says that the paradigm of connections only being defined between two elementary systems cannot be retained if one insists on eliminating systems with side streams by means of a steady-state or negligible-capacity assumption. Two consequences can be drawn from this result:
1. Either the reduction to a zero capacity is not allowed for systems with side streams, or
2. a skeleton of the transfer system must be retained in the topology, in which the streams meet and sum to zero. Any reaction only adds a term but does not affect the result structurally.
The process is sketched quickly in Figure 9.12. However, depending on the time

[Figure: a steak covered by marinade, with the co-ordinate r across the marinade and steak layers.]

We assume the steak to be of uniform thickness d_S and the marinade to be of depth d_M. Further, we introduce a co-ordinate system whereby the problem is considered one-dimensional. The co-ordinate is labelled r.
9.1.7.2.1 Case 3
Labelling the marinade with subscript M and the steak with S, the behaviour for case 3 is given by:

\dot n_M = -\hat n_{M|S} \,, \qquad (9.92)
\frac{\partial c_S}{\partial t} = K_S\, \frac{\partial^2 \mu_S}{\partial r^2} \,, \qquad (9.93)
eq BC: \quad \mu_M(-\epsilon) = \mu_S(+\epsilon) \,, \qquad (9.94)
flow BC: \quad \hat n_{M|S}(-\epsilon) = \hat n_{M|S}(+\epsilon) \,, \qquad (9.95)
flow BC: \quad \hat n_{M|S}(d_S) = 0 \,. \qquad (9.96)

9.1.7.2.2 Case 4

\frac{\partial c_M}{\partial t} = K_M\, \frac{\partial^2 \mu_M}{\partial r^2} \,, \qquad (9.97)
\frac{\partial c_S}{\partial t} = K_S\, \frac{\partial^2 \mu_S}{\partial r^2} \,, \qquad (9.98)
eq BC: \quad \mu_M(-\epsilon) = \mu_S(+\epsilon) \,, \qquad (9.99)
flow BC: \quad \hat n_{M|S}(-\epsilon) = \hat n_{M|S}(+\epsilon) \,, \qquad (9.100)
flow BC: \quad \hat n_{M|S}(-d_M) = 0 \,, \qquad (9.101)
flow BC: \quad \hat n_{M|S}(d_S) = 0 \,. \qquad (9.102)
\mu := \mu^o + R\, T \ln x \,. \qquad (9.104)

This introduces the mole fractions, which need to be the result of mapping the component mass:

x := n^{-1}\, n \,. \qquad (9.105)

And

n := e^T n \,, \qquad (9.106)
c := V^{-1}\, n \,, \qquad (9.107)
V := \rho^{-1}\, n \,. \qquad (9.108)

There is no reaction taking place and the temperature is assumed to be constant. With the chemical potentials at normal conditions being given, the set of transformations is complete.
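The chain of transformations (9.105)-(9.108), mapping the component-mass vector to mole fractions, volume and concentrations, can be sketched as follows; the component moles and the molar density are arbitrary test values.

```python
import numpy as np

n_vec = np.array([3.0, 1.0])    # component mole numbers (assumed)
e = np.ones_like(n_vec)         # the one-vector

n_tot = e @ n_vec               # total moles,        Equation (9.106)
x = n_vec / n_tot               # mole fractions,     Equation (9.105)
rho = 50.0                      # molar density [mol/volume] (assumed constant)
V = n_tot / rho                 # volume,             Equation (9.108)
c = n_vec / V                   # concentrations,     Equation (9.107)

print(x, V, c)                  # x sums to one; c * V recovers n
```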
The partial differential equations are discretised in the spatial co-ordinate using a 3-point approximation, using the index k for the points on the regular grid of width \Delta r (Section 3.1). The discretisation for case 4 introduces the indexing scheme 0, 1, 2, \ldots, n, n+1, \ldots, n+m, with point 0 representing the outer surface of the marinade (no-flow condition), n representing the boundary to the steak, and n+m the outer surface of the steak (no-flow condition). Obviously, for case 3 this simplifies by having n = 0.
For the internal points, we thus write:

\left. \frac{\partial^2 \mu_M}{\partial r^2} \right|_k := \frac{\mu_{k-1} - 2\, \mu_k + \mu_{k+1}}{(\Delta r)^2} \,. \qquad (9.109)

\left. \frac{\partial^2 \mu_M}{\partial r^2} \right|_0 := \frac{\mu_0 - 2\, \mu_1 + \mu_2}{(\Delta r)^2} \,, \qquad (9.110)
\left. \frac{\partial^2 \mu_M}{\partial r^2} \right|_{n+m} := \frac{\mu_{n+m-2} - 2\, \mu_{n+m-1} + \mu_{n+m}}{(\Delta r)^2} \,. \qquad (9.111)

\hat n_{M|S}(-d_M) := \frac{\mu_1 - \mu_0}{\Delta r} := 0 \,, \qquad (9.112)
\hat n_{M|S}(d_S) := \frac{\mu_{n+m} - \mu_{n+m-1}}{\Delta r} := 0 \,. \qquad (9.113)

\left. \frac{\partial^2 \mu_M}{\partial r^2} \right|_0 := \frac{\mu_{-1} - 2\, \mu_0 + \mu_1}{(\Delta r)^2} \,, \qquad (9.114)
\left. \frac{\partial^2 \mu_M}{\partial r^2} \right|_{n+m} := \frac{\mu_{n+m-1} - 2\, \mu_{n+m} + \mu_{n+m+1}}{(\Delta r)^2} \,. \qquad (9.115)

With the symmetry \mu_{-1} = \mu_1 and \mu_{n+m-1} = \mu_{n+m+1}, the two expressions simplify to:

\left. \frac{\partial^2 \mu_M}{\partial r^2} \right|_0 := \frac{-2\, \mu_0 + 2\, \mu_1}{(\Delta r)^2} \,, \qquad (9.116)
\left. \frac{\partial^2 \mu_M}{\partial r^2} \right|_{n+m} := \frac{2\, \mu_{n+m-1} - 2\, \mu_{n+m}}{(\Delta r)^2} \,. \qquad (9.117)

9.1.7.6.1 Case 3

\dot n_0 := -K_S\, A\, \frac{\mu_1 - \mu_0}{\Delta r} \,. \qquad (9.118)
\begin{bmatrix} \dot c_1 \\ \dot c_2 \\ \vdots \\ \dot c_{m-1} \\ \dot c_m \end{bmatrix} := \begin{bmatrix} -2M & M & & & \\ M & -2M & M & & \\ & \ddots & \ddots & \ddots & \\ & & M & -2M & M \\ & & & 2M & -2M \end{bmatrix} \begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_{m-1} \\ \mu_m \end{bmatrix} + \begin{bmatrix} M\, \mu_0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix} \,, \qquad (9.119)

with M := \Delta r^{-2}\, K_S.
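The tridiagonal structure of Equation (9.119) is easy to assemble explicitly. Below is a sketch for case 3: m interior points in the steak, the marinade potential mu_0 entering as a source term in the first row, and the symmetry (no-flow) condition giving the 2M entry in the last row; m, M and mu_0 are arbitrary test values.

```python
import numpy as np

m, M = 6, 1.0                 # number of interior points and M (test values)
A = np.zeros((m, m))
for i in range(m):
    A[i, i] = -2.0 * M
    if i > 0:
        A[i, i - 1] = M
    if i < m - 1:
        A[i, i + 1] = M
A[m - 1, m - 2] = 2.0 * M     # last row [ ... 2M  -2M ] from the symmetry BC

b = np.zeros(m)
b[0] = M                      # source term M * mu_0 in the first row

mu0 = 1.0
mu = np.full(m, mu0)          # a uniform potential equal to mu_0 ...
print(A @ mu + b * mu0)       # ... is an equilibrium: all rates are zero
```

A uniform potential producing zero rates is a quick consistency check that the boundary rows were assembled correctly.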
9.1.7.6.2 Case 4
For case 4 we first have to sort out the boundary between the two phases by computing the missing \mu_n from the boundary condition, with R := \frac{\Delta r_S}{\Delta r_M}\, K_S^{-1}\, K_M, which gives:

\mu_n := \left( R + I \right)^{-1} \left( \mu_{n-1} + R\, \mu_{n+1} \right) \,. \qquad (9.121)

This is then substituted into the expressions for the approximations at the two points left and right of the boundary:

\left. \frac{\partial^2 \mu}{\partial r^2} \right|_{n-1} := \frac{\mu_{n-2} - 2\, \mu_{n-1} + \mu_n}{\Delta r^2} \,, \qquad (9.122)
:= \frac{\mu_{n-2} - 2\, \mu_{n-1} + (R+I)^{-1} \left( \mu_{n-1} + R\, \mu_{n+1} \right)}{\Delta r^2} \,, \qquad (9.123)
:= \frac{\mu_{n-2} + \left( (R+I)^{-1} - 2\, I \right) \mu_{n-1} + (R+I)^{-1} R\, \mu_{n+1}}{\Delta r^2} \,, \qquad (9.124)
and

\left. \frac{\partial^2 \mu}{\partial r^2} \right|_{n+1} := \frac{\mu_n - 2\, \mu_{n+1} + \mu_{n+2}}{\Delta r^2} \,, \qquad (9.125)
:= \frac{(R+I)^{-1} \left( \mu_{n-1} + R\, \mu_{n+1} \right) - 2\, \mu_{n+1} + \mu_{n+2}}{\Delta r^2} \,, \qquad (9.126)
:= \frac{(R+I)^{-1} \mu_{n-1} + \left( (R+I)^{-1} R - 2\, I \right) \mu_{n+1} + \mu_{n+2}}{\Delta r^2} \,. \qquad (9.127)

\dot c := L\, \mu \,, \qquad (9.128)
with

\dot c := \begin{bmatrix} \dot c_0 \\ \dot c_1 \\ \vdots \\ \dot c_{n-2} \\ \dot c_{n-1} \\ \dot c_{n+1} \\ \dot c_{n+2} \\ \vdots \\ \dot c_{n+m} \end{bmatrix} \quad \text{and} \quad \mu := \begin{bmatrix} \mu_0 \\ \mu_1 \\ \vdots \\ \mu_{n-2} \\ \mu_{n-1} \\ \mu_{n+1} \\ \mu_{n+2} \\ \vdots \\ \mu_{m+n-1} \\ \mu_{m+n} \end{bmatrix} \,,

L := \begin{bmatrix} -2M & 2M & & & & & & \\ M & -2M & M & & & & & \\ & \ddots & \ddots & \ddots & & & & \\ & M & -2M & M & & & & \\ & & M & M\, Q_1 & M\, Q_2 & & & \\ & & & S\, Q_2 & S\, Q_1 & S & & \\ & & & & S & -2S & S & \\ & & & & & \ddots & \ddots & \ddots \\ & & & & & S & -2S & S \\ & & & & & & 2S & -2S \end{bmatrix}

where

M := \Delta r_M^{-2}\, K_M \,, \quad S := \Delta r_S^{-2}\, K_S \,, \quad Q_1 := (R+I)^{-1} - 2\, I \,, \quad Q_2 := (R+I)^{-1}\, R \,.
Figure 9.17 and Figure 9.18 show some results from simulations.
Figure 9.17: Simulated concentration profile along the position co-ordinate from marinade to steak.

Figure 9.18: Simulated concentration as a surface over the position co-ordinate (marinade to steak) and time.
\frac{\partial T(z,x)}{\partial t} = \frac{k}{\rho\, c_p} \left( \frac{\partial^2 T(z,x)}{\partial x^2} + \frac{\partial^2 T(z,x)}{\partial z^2} \right) - \frac{a_y\, w}{\rho\, c_p\, A_{yz}} \left( T(0,x) - T_R \right) \,. \qquad (9.129)
Discretisation is done in the two co-ordinates z and x. For simplicity we use the same discretisation quantity in both directions; call it h. Using the index j for the x co-ordinate and the index i for the z co-ordinate, the grid is as shown in Figure 9.19. The empty dots represent the internal points, which include the two exposed boundaries: the bottom being insulated, the top exposed to the air in the room. The two boundaries left and right are driving the process and are assumed to be given as conditions. Note the order of the spatial co-ordinates. Based on the grid one can propose different finite difference approximations. Here we use the most common, but also the simplest, 3-point finite difference approximation.
Figure 9.19: Grid arrangement for the 2-D case study of the cooling fin, with the x-index j running 0, 1, \ldots, n+1 and the z-index i running 0, \ldots, m+1.
The discrete model uses the notation T_{i,j}, with i the index in the z co-ordinate and j the index in the x co-ordinate.
Where:

\alpha := \frac{k}{\rho\, c_p}\, h^{-2} \,, \qquad (9.133)
\beta := \frac{a_y\, w}{\rho\, c_p\, A_{yz}}\, h \,. \qquad (9.134)
The next step is to cast these equations into a matrix state-space form. The state is the temperature of the internal node points, and the input is the points in the left and the right boundary and the room temperature.
All of the matrices are sparse and reflect the regular patterns of the grid and the weights of the finite difference approximation. The matrices A_b and B_b are
01 02 03 04 11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44
01 −4 +1 +2
02 +1 −4 +1 +2
03 +1 −4 +1 +2
04 +1 −4 +2
11 +1 −4 +1 +1
12 +1 +1 −4 +1 +1
13 +1 +1 −4 +1 +1
14 +1 +1 −4 +1
21 +1 −4 +1 +1
22 +1 +1 −4 +1 +1
23 +1 +1 −4 +1 +1
24 +1 +1 −4 +1
31 +1 −4 +1 +1
32 +1 +1 −4 +1 +1
33 +1 +1 −4 +1 +1
34 +1 +1 −4 +1
41 +2 −4 +1
42 +2 +1 −4 +1
43 +2 +1 −4 +1
44 +2 +1 −4
01 02 03 04 11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44
01 +1
02 +1
03 +1
04 +1
00 10 20 30 40 05 15 25 35 45 R
01 +1
02
03
04 +1
11 +1
12
13
14 +1
21 +1
22
23
24 +1
31 +1
32
33
34 +1
41 +1
42
43
44 +1
00 10 20 30 40 05 15 25 35 45 R
01 −1
02 −1
03 −1
04 −1
Figure 9.20: The four matrices for the fin 2-D discrete representation
Figure 9.20 shows the four matrices A_s, b_b, B_s, b_b for the case where n := 3 and m := 4. The process is driven by the temperatures at the left and the right boundaries, being the grid points i := 0, \ldots, 4, j := 0 and i := 0, \ldots, 4, j := 5, and the room temperature labelled with R.
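The pattern of such a matrix can be generated programmatically. The sketch below assembles a 5-point finite-difference Laplacian on a grid of internal points, where the insulated z-edges are handled by mirror (ghost) nodes that fold into the +2 entries seen in the figure; the grid dimensions are test values, and the driven left/right boundary columns are omitted (they belong to the input matrix).

```python
import numpy as np

nz, nx = 5, 4          # internal points: i = 0..nz-1 in z, j = 1..nx in x

def idx(i, j):
    """Row index of grid point (i, j) in the stacked state vector."""
    return i * nx + (j - 1)

A = np.zeros((nz * nx, nz * nx))
for i in range(nz):
    for j in range(1, nx + 1):
        r = idx(i, j)
        A[r, r] = -4.0
        if j > 1:
            A[r, idx(i, j - 1)] += 1.0   # left neighbour in x
        if j < nx:
            A[r, idx(i, j + 1)] += 1.0   # right neighbour in x
        # z-direction: mirror the missing neighbour at the insulated edges
        A[r, idx(i - 1, j) if i > 0 else idx(i + 1, j)] += 1.0
        A[r, idx(i + 1, j) if i < nz - 1 else idx(i - 1, j)] += 1.0

print(A[idx(0, 1), idx(1, 1)])   # -> 2.0, the mirrored edge entry
```

Interior rows sum to zero, while rows touching a driven boundary sum to -1, since the missing +1 moves to the input matrix.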
Figure 9.21: 2-D fin dynamics at times 0, 10, 20, 40 and 80: left cold (50), right hot (100), front room (75 | 0).
The objective is to model a dynamic flash and then gradually apply simplifying assumptions.
It seems reasonable to assume the two phases to be well mixed in the bulk, only exhibiting changes in the intensities in the neighbourhood of the boundary, if one assumes the process to be ideally insulated. For the local changes one may assume a film-theory model with no capacity effects, thus a Nernst approximation for the film.
[Figure: liquid bulk L and gas bulk G separated by a liquid film, the boundary, and a gas film.]
9.1.9.2.1 Balances
9.1.9.2.2 Transport
9.1.9.2.3 Transformations
We assume that for the liquid phase and for the gas phase we have a model in the form of an energy function. The model is given as a Helmholtz surface:

A_s := A_s(T_s, V_s, n_s) \,, \qquad (9.157)

and for the liquid phase we also provide the density as a function of the temperature:

\rho_L := \rho(T_L) \,, \qquad (9.158)

and

V_L := e^T n_L / \rho_L \,, \qquad (9.159)

and

\mu_s := \frac{\partial A_s}{\partial n_s} \,. \qquad (9.162)

The transformations must link the secondary state variables being used for the transport equations to the primary state, namely the component mass and the internal energy of the two capacities L and G. The sequence then is:
9.1.9.3 Manipulations
9.1.9.3.1 Boundary
The boundary has no capacity; thus the accumulation terms in the respective balance equations are zero. Consequently the boundary has no state, and thus no intensive properties can be derived from state transformations. The intensive properties at the boundaries are the local ones at the limit of the respective phase. They must be extracted from the stationary balances, which imply:
From these two equations the chemical potential at the boundary and the temperature are extracted. For the chemical potential the solution is found quickly after substituting the transfer laws:

\mu_B := \left( K^n_{L|B} + K^n_{B|G} \right)^{-1} \left( K^n_{L|B}\, \mu_L + K^n_{B|G}\, \mu_G \right) \,. \qquad (9.165)

Solving the energy balance for the temperature cannot readily be done analytically if the partial molar enthalpies are a nonlinear function of the temperature. If the constant-pressure specific heat capacities can be assumed constant and all involved transport coefficients are constant, then the problem is linear in the boundary temperature and can thus be readily solved.
The process describes for example a mixing process. It has two internal cycles. An inflow drives the system. A constant-volume assumption is made for each individual part, thus also for the overall process.

Figure 9.24: A seemingly complex mixing process. The boxed volumetric flows (\hat V_{c|f}, \hat V_{h|i}, \hat V_{g|h}, \hat V_{e|g}) are given, and so is the composition of the inflow to the plant. All flows are assumed to be unidirectional, thus flow direction does not change during the process' operation.
The overall node set is split into internal and external nodes:

S := \{a, b, c, d, e, f, g, h, i, k, l\} \qquad (9.166)
S_i := \{a, b, c, d, e, f, g, h, i\} \qquad (9.167)
S_e := \{k, l\} \qquad (9.168)

Similarly, the stream set splits into an internal and an external stream set:

M := \{a|b, b|c, c|d, d|e, e|g, c|f, f|g, g|h, h|i, i|a, k|a, d|l\} \qquad (9.169)
M_i := \{a|b, b|c, c|d, d|e, e|g, c|f, f|g, g|h, h|i, i|a\} \qquad (9.170)
M_e := \{k|a, d|l\} \qquad (9.171)
9.1.10.2 Model
All vectors and matrices are block vectors and block matrices. In this case, the flow matrix is the Kronecker product

\bar F := F \otimes I \,. \qquad (9.173)
F_o :=

        a|b  b|c  c|d  d|e  e|g  c|f  f|g  g|h  h|i  i|a  k|a  d|l
   a    -1                                           +1   +1
   b    +1   -1
   c         +1   -1             -1
   d              +1   -1                                      -1
   e                   +1   -1
   f                             +1   -1
   g                        +1        +1   -1
   h                                       +1   -1
   i                                            +1   -1
   k                                                      -1
   l                                                           +1

(9.174)
m_s := e^T n_s \qquad (9.175)
:= \rho\, V_s \qquad (9.176)
\rho := \text{constant} \qquad (9.177)
\dot m_s := \rho\, \dot V_s \qquad (9.178)

The last is found simply by inspection. Thus for the arbitrary compartment s one gets:

\dot m_s := F_s\, \hat m \,, \qquad (9.184)
\rho\, \dot V_s := F_s\, \rho\, \hat V \,, \qquad (9.185)
\dot V_s := F_s\, \hat V \,. \qquad (9.186)
0 := F_s\, \hat V \,, \qquad (9.188)
0 := F\, \hat V \,. \qquad (9.189)

For the calculation of the unknown flow rates, we define two selection matrices: the first, S_k, selects the known volumetric flows, whilst S_n selects the unknown ones. The two matrices are used to split the constant-volume equation into two parts. Defining

B_v := F\, S_k^T \,, \qquad (9.191)
A_v := F\, S_n^T \,, \qquad (9.192)
\hat V_k := S_k\, \hat V \,, \qquad (9.193)
\hat V_n := S_n\, \hat V \,, \qquad (9.194)

\hat V_n := -A_v^{-1}\, B_v\, \hat V_k \,. \qquad (9.195)
\dot m_s := F_s\, \hat m = 0 \,, \qquad (9.196)
e^T n_s = \text{constant} \,. \qquad (9.197)

This in turn implies that the state for each compartment is reduced by one! The component mass vectors, and consequently the derived composition vectors, are reduced by one dimension. For a tracer-solvent system this implies that one only requires the equations for the tracer, making all states scalar.
c := V^{-1}\, n \,, \qquad (9.198)

The properties of the stream, the intensities, are given by the physical source of the stream, thus depend on the physical direction of the flow. If the stream changes direction, then the properties switch too. Assuming that the flows are uni-directional, thus do not change as part of the process operation, the physical source of the flows is fixed and can be extracted from the graph, thus from the incidence matrix.
For this purpose the incidence matrix is first split into three parts, namely the subnetwork for the internal streams and nodes only, the part associated with the inflows and the part associated with the outflows. The partition is achieved by row selection. The matrix

P := \left[ p_{m,s} := \begin{cases} -f_{m,s} & \text{if } f_{m,s} = -1 \\ 0 & \text{otherwise} \end{cases} \right]_{m := i+o,\; s := r} \qquad (9.199)

\bar P := P \otimes I \,. \qquad (9.200)
x := \bar c \qquad (9.202)
u := \bar c_f \qquad (9.203)
y := x \qquad (9.204)
A := \bar V^{-1}\, \bar F_{r,i+o}\, \bar{\hat V}_{i+o}\, \bar P \qquad (9.205)
B := \bar V^{-1}\, \bar F_{r,f}\, \bar{\hat V}_f \qquad (9.206)
C := I \qquad (9.207)
D := 0 \qquad (9.208)
Assuming that the flow in the left cycle is much larger than the one in the right and the flow through the process, the flows M_d := \{a|b, b|c, c|f, f|g, g|h, h|i, i|a\} are to be eliminated. For this purpose we split the incidence matrix for the process into two parts: one that encloses the subnet formed by the fast flows, denoted by F_f, and the remainder, denoted by F_r.

\dot{\bar n} := \bar F_f\, \bar{\hat n} + \bar F_r\, \bar{\hat n} \qquad (9.209)
:= (F_f \otimes I)\, \bar{\hat n} + \bar F_r\, \bar{\hat n} \qquad (9.210)

To eliminate the fast flows, we multiply with a matrix \bar\Omega chosen such that the product

\bar\Omega\, \bar F_f =: 0 \,, \qquad (9.211)
\Omega\, F_f =: 0 \,, \qquad (9.212)
\bar\Omega := \Omega \otimes I \,. \qquad (9.213)
From the graphical representation it is easy to see that the \Omega simply adds those nodes that are connected by the fast flows. In our example this results in the graph of Figure 9.25.
Relabelling the graph, the same approach as discussed above is applied to obtain the ABCD representation.
[Figures: time responses comparing the average of the lumped and the reduced model.]
9.2 Theory
9.2.1 Differential Balance (Shell Balance)
9.2.1.1 Problem Definition
9.2.1.2 Solution
Let r denote the spatial co-ordinate. We draw up the balance for a small volume characterised by the co-ordinates r and r + \Delta r, where \Delta denotes a small increment in r; with the area at the location being A, the balance reads:

\frac{dx}{dt} := A\, \delta\hat x|_r - A\, \delta\hat x|_{r+\Delta r} \,. \qquad (9.215)

Expanding the change in the flux as a Taylor series, truncated after the linear term:

\delta\hat x|_{r+\Delta r} := \delta\hat x|_r + \frac{\partial \delta\hat x}{\partial r}\, \Delta r \,, \qquad (9.216)

one finds for the balance:

\frac{dx}{dt} := A\, \delta\hat x|_r - A \left( \delta\hat x|_r + \frac{\partial \delta\hat x}{\partial r}\, \Delta r \right) \,, \qquad (9.217)
:= -A\, \frac{\partial \delta\hat x}{\partial r}\, \Delta r \,, \qquad (9.218)
:= -\frac{\partial \delta\hat x}{\partial r}\, \Delta V \,, \qquad (9.219)
\frac{d\, x/\Delta V}{dt} := -\frac{\partial \delta\hat x}{\partial r} \,. \qquad (9.220)
Letting the volume approach zero by letting \Delta r go to zero in the limit:

\lim_{\Delta V \to 0} \frac{d\, x/\Delta V}{dt} := -\frac{\partial \delta\hat x}{\partial r} \,, \qquad (9.221)

one finds:

\frac{\partial \delta x(r,t)}{\partial t} := -\frac{\partial \delta\hat x}{\partial \pi}\, \frac{\partial \pi}{\partial r} \,. \qquad (9.222)
So far nothing has been said about the transfer law. Taking the simplest one, namely Equation (2.25), the right-hand side becomes:

\frac{\partial \delta x}{\partial t} := \frac{\partial}{\partial r} \left( \lambda\, \frac{\partial \pi}{\partial r} \right) \,, \qquad (9.223)
:= \lambda\, \frac{\partial^2 \pi}{\partial r^2} \,. \qquad (9.224)

Thus we get:

\frac{\partial \delta x(r,t)}{\partial t} := \lambda\, \frac{\partial^2 \pi}{\partial r^2} \,. \qquad (9.225)
The time derivative of the conserved extensive quantity, normed with the volume, thus the respective density, is proportional to the second derivative of the driving force.
To go on: take for the conserved quantity the enthalpy H and for the transferred quantity heat; then \pi := T, and consequently the transfer parameter \lambda is the heat conductivity of the described material. Assuming the specific heat and the density are constant, the balance becomes:

\rho\, c_p\, \frac{\partial T}{\partial t} := \lambda\, \frac{\partial^2 T}{\partial r^2} \,, \qquad (9.226)

which after re-arrangement becomes:

\frac{\partial T}{\partial t} := \frac{\lambda}{\rho\, c_p}\, \frac{\partial^2 T}{\partial r^2} \,, \qquad (9.227)
:= \alpha\, \frac{\partial^2 T}{\partial r^2} \,, \qquad (9.228)

where \alpha is the heat diffusivity.
The single input is mapped onto the state with the vector b, thereby taking the rôle of the matrix B, and the state is mapped onto the single output by c^T, taking the rôle of the matrix C. The input acting directly on the output is amplified with the scalar d.
The transfer function of the SISO system is then:

g(s) := c^T\, |I\, s - A|^{-1}\, \text{adj}(I\, s - A)\, b + d \,. \qquad (9.231)
Making also the state scalar yields the structurally simplest model one can generate without eliminating one or the other of the system matrices. The most common additional simplification is the case where d is zero. The transfer function then consists of scalar quantities only and reads:
For a stable system a < 0; thus the time constant -\frac{1}{a} and the steady-state gain \frac{c\, b}{-a} are positive.
This first-order system is used in various applications as a first approximation for a dynamic behaviour. Applying it to the description of a physical process requires finding two parameters, namely the steady-state gain and the time constant. The identification experiment must excite the system sufficiently dynamically in order to see the behaviour of the plant. Probably the most common approach, though not necessarily the best one, is to inject a step and extract the two parameters from the plant's input and the plant's response, called the step response.
The fitting is most commonly done manually, meaning on a graph showing the step input and the plant's response. How one can find the two parameters from the response is easy to see from an analysis of the analytical solution.
The solution in the time domain is:

y(t) := c\, \exp\{a\, t\}\, x(0) + \int_0^t c\, \exp\{a\, \theta\}\, b\, u(t - \theta)\, d\theta \,. \qquad (9.234)

With u(t) being a step, and assuming that the plant is at zero state initially, the expression simplifies to:

y(t) := \frac{c\, b}{a}\, \exp\{a\, \theta\}\Big|_0^t\, u^0 \,, \qquad (9.235)
y(t) := \frac{c\, b}{a}\, \left( \exp\{a\, t\} - 1 \right) u^0 \,. \qquad (9.236)

For stable plants, that is a < 0, the steady-state gain is thus:

k := \frac{c\, b}{-a} \qquad (9.237)
:= \frac{c\, b}{|a|} \,. \qquad (9.238)
The two asymptotes (the tangent at zero and the tangent to the steady state at infinity) to the step response are thus:

v(t) := \frac{c\, b}{|a|}\, u^0 \,, \qquad (9.241)
w(t) := c\, b\, u^0\, t \,. \qquad (9.242)

The two asymptotes intersect at t = \frac{1}{|a|}, which is the time constant \tau. Interesting is also how far the process has come after n times the time constant:

y(n\, \tau) := \frac{c\, b}{a}\, \left( \exp\left\{ a\, n\, \frac{1}{|a|} \right\} - 1 \right) u^0 \qquad (9.245)
:= \frac{c\, b}{|a|}\, \left( 1 - \exp\{-n\} \right) u^0 \,. \qquad (9.246)
Figure 9.29: Step response y(t) of the first-order scalar SISO system to a step u(t) of height u^o, with the asymptotes v(t) and w(t), the time constant \tau := \frac{1}{|a|}, and the gain \frac{c\, b\, u^o}{|a|}; after one time constant the response has reached 63 % of its final value.
Bibliography

R A Alberty. Legendre transforms in chemical thermodynamics. Chemical Reviews, 94(6):1457-1482, 1994. 28, 29

L Apostel. Towards the formal study of models in the non-formal sciences. In H Freudenthal, editor, The concept and the role of the model in mathematics and natural and social sciences. D. Reidel Publishing Company, Dordrecht, The Netherlands, 1960. 21

R Aris. Mathematical modelling techniques. Pitman, London, 1978. 21

K J Astroem and P Eykhoff. System identification: a survey. Automatica, 7:123-162, 1971. 107, 110, 122

Kendall E Atkinson. Numerical analysis. Wiley, 1989. 67

R B Bird, W E Stewart, and E N Lightfoot. Transport Phenomena. Wiley, London, 2001. 25, 41

G E P Box and G C Tiao. Bayesian inference in statistical analysis. Addison Wesley, 1973. 123

G E P Box, W G Hunter, and J S Hunter. Statistics for experimenters: An introduction to design, data analysis, and model building. John Wiley, New York, 1978. 132, 133

Chi-Tsong Chen. Linear System Theory and Design. Holt, Rinehart and Winston, Inc, 1984. 71

R Courant, Friedrichs, and Lewy. Existence of solutions for wave equations. Mathematische Annalen, 100:32-74, 1928. 43

W M Deen. Analysis of Transport Phenomena. Topics in Chemical Engineering. Oxford University Press, New York, Oxford, 1998. Nice derivation of the conservation of an extensive quantity for a volume in a flow field. 29, 41

K Denbigh and J C R Turner. Chemical Reactor Theory: An Introduction. Cambridge University Press, Cambridge, UK, 2nd edition, 1971. 26

Institut des Sciences Cognitives. English thesaurus. URL http://dico.isc.cnrs.fr/en/index.html. 144

P Eykhoff. System Identification. Wiley, New York, 1974. 107, 110, 131

G C Goodwin and R L Payne. Dynamic system identification: Experiment design and data analysis. Academic Press, 1977. 112

S R Groot and P Mazur. Non-Equilibrium Thermodynamics. Dover Publications Inc, New York, 1983. 29

E A Guggenheim. Thermodynamics: An Advanced Treatment for Chemists and Physicists. North-Holland Publishing Company, Amsterdam, The Netherlands, 5th edition, 1967. TUE EE library GCB 67 GUG. 29

F B Hildebrand. Introduction to numerical analysis. McGraw-Hill, New York, 1956. 67

A H Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, New York, 1970. 128

Jack Johnston and John DiNardo. Econometric methods. McGraw Hill, 1997. 123