
Lecture Notes
in
Chemical Process Systems Engineering

Heinz A Preisig
Chemical Engineering
Norwegian University of Science and Technology (NTNU)
7491 Trondheim, Norway

Heinz.Preisig@chemeng.ntnu.no

Version: 03.00 - 2010-01


Printed: 2011-1-25
Contents

1 Synopsis 13

2 Mapping the World 17

2.1 Mapping Nature . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.1.1 Macroscopic Models . . . . . . . . . . . . . . . . . . . . . 17
2.1.2 Abstraction: Physical Topology . . . . . . . . . . . . . . . 19
2.1.3 Basic Building Blocks . . . . . . . . . . . . . . . . . . . . 20
2.1.4 Time Scales . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 The Nature of the Elementary Systems . . . . . . . . . . . . . . 22
2.2.1 System Dynamics: Conservation of Extensive Quantities 22
2.2.2 Integral Balance . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.3 Differential Balance . . . . . . . . . . . . . . . . . . . . . 25
2.3 Internal Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4 Extensive Quantity Transport . . . . . . . . . . . . . . . . . . . 27
2.4.1 Basic Transfer Laws . . . . . . . . . . . . . . . . . . . . . 27
2.4.1.1 Continuity Condition . . . . . . . . . . . . . . . 28
2.4.2 Phase Boundaries . . . . . . . . . . . . . . . . . . . . . . 30
2.4.2.1 Flux Condition . . . . . . . . . . . . . . . . . . 30
2.4.2.2 Jump Condition . . . . . . . . . . . . . . . . . . 31
2.5 Variables: Nature, Role and Transformations . . . . . . . . . . . 33
2.5.1 A Proper State-Space Representation . . . . . . . . . . . 33
2.5.2 The Text Book Representation . . . . . . . . . . . . . . 36
2.5.2.1 Example: Heating a Lump of Material . . . . . 37
2.5.3 State Representations are not Unique . . . . . . . . . . . 37

2.5.4 Minimal and Non-Minimal Representations . . . . . . . . 38

2.6 Secondary Assumptions . . . . . . . . . . . . . . . . . . . . . . . 38
2.6.1 Simplified Primitive Systems . . . . . . . . . . . . . . . . 40
2.6.1.1 Lumpy Boundaries . . . . . . . . . . . . . . . . 40
2.6.1.2 Two Extreme Systems . . . . . . . . . . . . . . 41
2.6.1.2.1 Minimal Internal Recycle . . . . . . . . 41
2.6.1.2.2 Maximal Internal Recycle . . . . . . . . 43
2.6.2 Network Representation . . . . . . . . . . . . . . . . . . . 44
2.6.2.1 A Graph is a Very Rich Representation . . . . . 44
2.6.2.1.1 Assumed Nature of the Containment . 44
2.6.2.1.2 Colouring in . . . . . . . . . . . . . . . 45
2.6.2.2 From Graphs to Equations . . . . . . . . . . . . 47
2.6.2.2.1 Adding Colours . . . . . . . . . . . . . 49
2.6.3 Handling Complexity . . . . . . . . . . . . . . . . . . . . 50
2.7 Three Extreme Dynamic Assumptions . . . . . . . . . . . . . . . 50
2.7.1 Fast and Slow Capacities . . . . . . . . . . . . . . . . . . 50
2.7.1.1 Outer Solution . . . . . . . . . . . . . . . . . . . 52
2.7.1.2 Inner Solution . . . . . . . . . . . . . . . . . . . 53
2.7.2 Fast Transfer . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.7.3 Fast Transpositions . . . . . . . . . . . . . . . . . . . . . . 55
2.7.4 Outer Solution for a Network of Fast and Slow Systems . 57
2.7.5 Assumptions on Assemblies . . . . . . . . . . . . . . . . . 58
2.7.6 Assumptions in the Space of the Secondary States . . . . 59
2.7.7 Unmodelled Components . . . . . . . . . . . . . . . . . . 59
2.8 Link to Control-Related System Theory . . . . . . . . . . . . . 60
2.8.1 Plant and Its Environment . . . . . . . . . . . . . . . . 60
2.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.9.1 The Primary Model . . . . . . . . . . . . . . . . . . . . 61
2.9.1.1 Network of Distributed Systems . . . . . . . . 61
2.9.1.2 Transport and Internal Transposition . . . . . 62
2.9.2 Simplifications . . . . . . . . . . . . . . . . . . . . . . . 63
2.9.2.1 No Internal Mixing vs. Dominated by Internal Mixing . . . 63
2.9.2.2 Order of Magnitude Assumptions . . . . . . . . 63
2.9.2.3 Modelling Patterns . . . . . . . . . . . . . . . . 63
2.9.2.3.1 Step by Step . . . . . . . . . . . . . . 63
2.9.2.3.2 Not Knowing it All . . . . . . . . . . 64

3 Approximating Distributed Systems 67

3.1 Finite Difference Approximation . . . . . . . . . . . . . . . . . . 67

3.1.1 Extension to Higher-Dimensional Problems . . . . . . . 70

4 System Theory's {A,B,C,D} 71

4.1 Time-Domain Representation . . . . . . . . . . . . . . . . . . . 71

4.1.1 The Standard Representation . . . . . . . . . . . . . . . 71
4.1.1.1 Time-Domain Solution . . . . . . . . . . . . . . 72
4.1.1.2 Sampled LTI-System . . . . . . . . . . . . . . . 72
4.1.1.3 Discrete System Representation Using Shift Operators . . . 73
4.1.2 Kalman's Decomposition . . . . . . . . . . . . . . . . . . 75
4.2 Getting the LTI System from the Mechanistic Model . . . . . . 76
4.3 Frequency Domain Representation . . . . . . . . . . . . . . . . 78
4.3.1 Transfer Functions Are Complex . . . . . . . . . . . . . . 80
4.3.2 Polynomial Transfer Functions . . . . . . . . . . . . . . 80
4.3.2.1 Transfer Functions of Transportation Lags: Dead-Time Elements . . . 82
4.3.2.1.1 Example: Transportation Lag . . . . . 82
4.3.2.2 Graphical Representation of Transfer Functions 82
4.3.2.3 Approximation of Polynomial Transfer Functions . . . 83
4.3.2.3.1 A Recipe Approach to Visualise Approximations of Bode Plots . . . 85
4.3.2.3.2 Transfer function of elementary transfer functions . . . 85
4.3.2.3.3 Decibels . . . . . . . . . . . . . . . . . 90
4.3.2.3.4 Non-minimum phase systems . . . . . . 91

5 Stability 93

5.1 The Concept of Stability . . . . . . . . . . . . . . . . . . . . . . 93

5.2 The Eigenvalue|Pole Argument for Linear, Time-Invariant Systems . . . 95
5.3 Direct Method of Liapunov . . . . . . . . . . . . . . . . . . . . 96
5.4 Stability of Linear, Continuous Systems . . . . . . . . . . . . . 97
5.4.1 Free, or Autonomous Systems . . . . . . . . . . . . . . . 97
5.4.2 Bounded Input, Bounded Output Stability (BIBO Stability) . . . 99
5.4.3 Time-Invariant Linear Systems . . . . . . . . . . . . . . . 99
5.4.4 The Routh Criterion . . . . . . . . . . . . . . . . . . . . 100
5.4.5 Nyquist Criterion (SISO) . . . . . . . . . . . . . . . . . 102
5.4.5.1 Cauchy's Principle Argument . . . . . . . . . . 102
5.4.5.2 The Stability Criterion . . . . . . . . . . . . . 102
5.4.5.3 The Simplified Criterion . . . . . . . . . . . . . 103
5.4.5.4 Gain and Phase Margin . . . . . . . . . . . . . 103

6 System Identification 107

6.1 Matching the Model to the Plant . . . . . . . . . . . . . . . . . . 107

6.2 Defining System Identification . . . . . . . . . . . . . . . . . . . . 108
6.2.1 Consequences . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.3 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.3.1 Data-driven Models . . . . . . . . . . . . . . . . . . . . . 111
6.3.2 Special Forms . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.3.2.1 Hammerstein Model . . . . . . . . . . . . . . . . 111
6.3.2.2 Wiener Model . . . . . . . . . . . . . . . . . . . 111
6.3.2.3 Static L-i-P (Linear-in-Parameters) Models . . . 112
6.4 Point Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.4.1 Least-Squares Estimator and L-i-P Models . . . . . . . . 115
6.4.1.1 Getting the Best Parameters . . . . . . . . . . . 115
6.4.1.2 Effect of Measurement Noise . . . . . . . . . . . 116
6.4.1.2.1 Correlation . . . . . . . . . . . . . . . . 117
6.4.1.3 Expected Accuracy . . . . . . . . . . . . . . . . 117

6.4.1.4 Confidence Limits for Parameters . . . . . . . . 118

6.4.1.5 How Good is the Identified Model: Variance Analysis . . . 119
6.4.1.5.1 Not knowing the variance . . . . . . . . 120
6.4.1.5.2 How to proceed . . . . . . . . . . . . . 121
6.4.1.6 Bias . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.4.1.6.1 Bias due to omitted variables . . . . . . 121
6.4.1.6.2 Bias due to correlation in output noise 122
6.4.1.6.3 Bias due to input noise . . . . . . . . . 122
6.4.1.7 Instrumental Variables . . . . . . . . . . . . . . 122
6.4.1.7.1 Choice of instruments . . . . . . . . . . 123
6.4.2 Maximum Likelihood Estimator . . . . . . . . . . . . . . 123
6.5 Selected Dynamic Systems . . . . . . . . . . . . . . . . . . . . . 123
6.5.1 Auto-Regressive-eXtra-input (ARX) Model . . . . . . . . 124
6.5.2 Auto-Regressive-Moving-Average-eXtra-input (ARMAX) Model . . . 125
6.5.3 General Transfer Function Model Structures . . . . . . . . 127
6.6 Kalman Filter in Identification . . . . . . . . . . . . . . . . . . . 128
6.6.1 Extended Kalman Filter . . . . . . . . . . . . . . . . . . . 130
6.7 The Excitation Signal . . . . . . . . . . . . . . . . . . . . . . . . 130
6.7.1 Design of Experiments . . . . . . . . . . . . . . . . . . . . 131
6.7.1.1 Single Block Design . . . . . . . . . . . . . . . . 132
6.7.1.2 Handling Additive Noise . . . . . . . . . . . . . 134
6.7.1.3 Reducing Trends . . . . . . . . . . . . . . . . . . 134
6.7.1.3.1 Randomising . . . . . . . . . . . . . . . 134
6.7.1.3.2 Block Designs . . . . . . . . . . . . . . 134
6.7.2 Optimal Designs . . . . . . . . . . . . . . . . . . . . . . . 134

7 Appendix: Mathematical Components 137

7.1 Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

7.1.1 Matrix differentiation . . . . . . . . . . . . . . . . . . . . 139
7.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.2.1 Leibniz' Rule . . . . . . . . . . . . . . . . . . . . . . . . 139
7.2.2 Taylor Expansion . . . . . . . . . . . . . . . . . . . . . . 139
7.2.3 Euler's Theorem on Homogeneous Functions . . . . . . . 139
7.2.4 Legendre Transformation Generating New Extensive Properties . . . 140
7.2.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.3 Vector Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3.1 Scalar Fields and Vector Fields . . . . . . . . . . . . . . 142
7.3.2 Differential Operators . . . . . . . . . . . . . . . . . . . 142
7.3.2.1 Gradient Operator . . . . . . . . . . . . . . . . 142
7.3.2.2 Divergence Operator . . . . . . . . . . . . . . . 142
7.3.2.3 Curl Operator . . . . . . . . . . . . . . . . . . 142
7.3.2.4 Laplace Operator . . . . . . . . . . . . . . . . 143
7.3.2.5 Nabla Operator . . . . . . . . . . . . . . . . . . 143
7.3.2.6 Relations . . . . . . . . . . . . . . . . . . . . . 143
7.3.3 Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.3.4 Divergence Theorem by Gauss . . . . . . . . . . . . . . 143
7.4 Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7.4.1 Basics of Graph Theory . . . . . . . . . . . . . . . . . . 144
7.5 Singular Perturbation: An Introduction . . . . . . . . . . . . . . 149
7.5.1 An Illustrative Example . . . . . . . . . . . . . . . . . . . 149
7.5.1.1 The Outer Solution . . . . . . . . . . . . . . . . 150
7.5.1.2 The Inner Solution . . . . . . . . . . . . . . . . . 150
7.5.1.3 Combining the Outer and the Inner Solution . . 151
7.5.1.4 Example . . . . . . . . . . . . . . . . . . . . . . 151
7.5.2 Simple form of Tikhonov's Theorem . . . . . . . . . . . . 152
7.6 Index of Differential Algebraic Equations . . . . . . . . . . . . . 153
7.7 Optimisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7.7.1 General Problem . . . . . . . . . . . . . . . . . . . . . . . 154
7.7.2 Unconstrained Optimisation . . . . . . . . . . . . . . . . . 154
7.7.2.1 One-Dimensional . . . . . . . . . . . . . . . . . . 154
7.8 Elements of Statistics . . . . . . . . . . . . . . . . . . . . . . . . 156
7.8.1 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . 156

7.8.1.1 Axiomatic Definition . . . . . . . . . . . . . . . 156

7.8.1.2 Bayes' Theorem . . . . . . . . . . . . . . . . . . 156
7.8.1.3 Distribution Measures . . . . . . . . . . . . . . . 157
7.8.1.3.1 Behaviour of Moments . . . . . . . . . 158
7.8.1.3.2 Some Follow-Ups . . . . . . . . . . . . . 159
7.8.2 Most Common Distribution Functions . . . . . . . . . . . 159
7.8.2.1 Binomial Distribution . . . . . . . . . . . . . . . 159
7.8.2.2 Poisson Distribution . . . . . . . . . . . . . . . . 159
7.8.2.3 Normal Distribution . . . . . . . . . . . . . . . . 160
7.8.2.4 Exponential Distribution . . . . . . . . . . . . . 160
7.8.2.5 Uniform Distribution . . . . . . . . . . . . . . . 160
7.8.3 Essential Statistics . . . . . . . . . . . . . . . . . . . . . . 160
7.8.3.1 Chi-Square Distribution . . . . . . . . . . . . . . 160
7.8.3.2 Student t Distribution . . . . . . . . . . . . . . . 161
7.8.3.3 F-Distribution . . . . . . . . . . . . . . . . . . . 161

8 Things to Know 163

8.1 Basics on Reactions . . . . . . . . . . . . . . . . . . . . . . . . . 163

8.1.1 Stoichiometry . . . . . . . . . . . . . . . . . . . . . . . . 163

9 Examples, Exercises, Answers 165

9.1 Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

9.1.1 Simple Processes . . . . . . . . . . . . . . . . . . . . . . . 165
9.1.1.1 Topology Exercises . . . . . . . . . . . . . . . . 165
9.1.1.2 Temperature Sensor . . . . . . . . . . . . . . . . 166
9.1.2 Evaporating Water from a Glass . . . . . . . . . . . . . . 169
9.1.2.1 Problem Description . . . . . . . . . . . . . . . . 169
9.1.2.2 Solution . . . . . . . . . . . . . . . . . . . . . . . 169
9.1.2.2.1 The Diffusion Equation . . . . . . . . . 169
9.1.2.2.2 Getting the Transfer Law: the Second Time Scale . . . 170
9.1.2.2.3 Finally the Water is Evaporating . . . . 171
9.1.3 The Mixing Plant . . . . . . . . . . . . . . . . . . . . . . 173

9.1.3.1 Solution . . . . . . . . . . . . . . . . . . . . . . . 173

9.1.3.1.1 Behaviour: Component Mass Balances 173
9.1.3.1.2 Transfer . . . . . . . . . . . . . . . . . . 174
9.1.3.1.3 Reaction . . . . . . . . . . . . . . . . . 174
9.1.3.1.4 State variable transformations . . . . . 174
9.1.3.1.5 Manipulations . . . . . . . . . . . . . . 175
9.1.3.1.6 Systems Representation . . . . . . . . . 176
9.1.4 Mixing Tank with Fast Reaction . . . . . . . . . . . . . . 178
9.1.4.1 Step 0: Abstraction . . . . . . . . . . . . . . . . 178
9.1.4.2 Step 1: Behaviour . . . . . . . . . . . . . . . . . 178
9.1.4.3 Step 2a: Transport . . . . . . . . . . . . . . . . . 178
9.1.4.4 Step 2b: Transposition . . . . . . . . . . . . . . 178
9.1.4.5 Step 3: Variable Transformations . . . . . . . . 179
9.1.4.6 Step 4: Conditions . . . . . . . . . . . . . . . . . 179
9.1.4.7 Step 5: Fast Reactions . . . . . . . . . . . . . . 180
9.1.4.8 The Reduced Model . . . . . . . . . . . . . . . . 180
9.1.5 Example: Linear Heat Conductor . . . . . . . . . . . . . . 180
9.1.6 The Game of Side Streams . . . . . . . . . . . . . . . . . 182
9.1.7 Marinating a Steak . . . . . . . . . . . . . . . . . . . . . 185
9.1.7.1 Step 0: Abstraction . . . . . . . . . . . . . . . . 185
9.1.7.2 Step 1: Behaviour . . . . . . . . . . . . . . . . . 187
9.1.7.2.1 Case 3 . . . . . . . . . . . . . . . . . . 187
9.1.7.2.2 Case 4 . . . . . . . . . . . . . . . . . . 187
9.1.7.3 Step 2a: Transport . . . . . . . . . . . . . . . . . 187
9.1.7.4 Step 3: Variable Transformation . . . . . . . . . 188
9.1.7.5 Step 4: Conditions . . . . . . . . . . . . . . . . . 188
9.1.7.6 Step 6: Manipulations . . . . . . . . . . . . . . . 188
9.1.7.6.1 Case 3 . . . . . . . . . . . . . . . . . . 189
9.1.7.6.2 Case 4 . . . . . . . . . . . . . . . . . . 190
9.1.8 2D-Heat Dissipation in a Fin . . . . . . . . . . . . . . . . 194
9.1.9 Dynamic Flash . . . . . . . . . . . . . . . . . . . . . . . . 199
9.1.9.1 A First Abstraction . . . . . . . . . . . . . . . . 199
9.1.9.2 The Base Model . . . . . . . . . . . . . . . . . . 200
9.1.9.2.1 Balances . . . . . . . . . . . . . . . . . 200
9.1.9.2.2 Transport . . . . . . . . . . . . . . . . . 200
9.1.9.2.3 Transformations . . . . . . . . . . . . . 200
9.1.9.3 Manipulations . . . . . . . . . . . . . . . . . . . 201
9.1.9.3.1 Boundary . . . . . . . . . . . . . . . . . 202
9.1.9.3.2 Assumption: Fast heat transfer in liquid 202
9.1.9.3.3 Assumption: Fast overall heat transfer 202
9.1.9.3.4 Assumption: Fast diffusion in liquid . . 202
9.1.9.3.5 Assumption: Fast overall diffusion . . . 202
9.1.9.3.6 Assumption: negligible capacity for the gas phase . . . 202
9.1.10 Multi-loop mixing and singular perturbation . . . . . . . 203
9.1.10.1 Abstract Process . . . . . . . . . . . . . . . . . . 203
9.1.10.2 Model . . . . . . . . . . . . . . . . . . . . . . . . 203
9.1.10.2.1 The balances . . . . . . . . . . . . . . . 203
9.1.10.2.2 The constant volume assumption . . . . 204
9.1.10.2.3 The ABCD representation . . . . . . . 206
9.1.10.2.4 Model reduction . . . . . . . . . . . . . 207
9.1.10.3 Some simulation results . . . . . . . . . . . . . . 208
9.2 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
9.2.1 Differential Balance (Shell Balance) . . . . . . . . . . . . 210
9.2.1.1 Problem Definition . . . . . . . . . . . . . . . . . 210
9.2.1.2 Solution . . . . . . . . . . . . . . . . . . . . . . . 210
9.2.1.3 An Example: Fourier's Heat Diffusion Equation 211
9.2.2 Transfer Functions . . . . . . . . . . . . . . . . . . . . . 212
9.2.3 Basic dynamic systems . . . . . . . . . . . . . . . . . . . . 213
9.2.3.1 First-Order Single-Input-Single-Output System . 213
9.2.3.1.1 Scalar-State Case . . . . . . . . . . . . 213
Chapter 1

Synopsis

Modelling is at the centre of almost any activity associated with engineering and science. It is thus not surprising that the term modelling is used in a variety of contexts and for many different things. Here we shall refer to modelling as the process of generating a mathematical construct that mimics the behaviour of the piece of world being modelled. That piece of world can be nearly anything: a processing plant, any part thereof, in any detail; a living species, microbes, a green plant; a piece of rock, a tectonic plate; really anything that exists, but also any artificial object such as an algorithm or a program, to mention just two.
The task of generating a mathematical model may be split into three primary domains (Figure 1.1):

1. Primary mapping of the real-world object of interest into a mathematical object. The basis for this operation is some kind of theory, which is usually the subject of a specific discipline, such as fluid mechanics to model flows, or material sciences and thermodynamics to model the material properties, to mention just two. The result of this operation is a set of equations which, if dynamics are described, are either a set of ordinary differential equations combined with a set of algebraic equations, or a set of partial differential equations combined with a set of algebraic equations. The first type of model is a differential-algebraic model whilst the second is a partial-differential-algebraic model. The first will be referred to as a lumped model whilst the second will be called a distributed model. The chosen structure represents the implementation of a first set of assumptions, primarily considering time scales and length scales.

2. Model simplification: here, the model fidelity is adjusted. This adjustment is in all cases a simplification; model refinement we consider to be part of the primary domain. Simplifications typically implement additional time-scale and length-scale assumptions. Often they are order-of-magnitude assumptions, which lead to simplifications of the model. Additionally, purely mathematically motivated simplifications may


Figure 1.1: Modelling overview, three major domains: 1. primary modelling maps the world into a mathematical object using theory T; 2. model simplification simplifies to match the use of the model using simplifications S; 3. model identification fits the model to the plant, adjusting the model to minimize the prediction-result mismatch.

be introduced, such as a polynomial approximation or a linearization, to mention again just two common simplifications.
3. The third domain fits the available free variables of the model such that the predictions obtained from the model match the experimental results best, in the sense of a defined objective function and a measure for the mismatch. This is usually referred to as model identification or parameter identification. In the first case the structure of the model may change as well as the respective parameters, whilst in the second case the structure is fixed and only the parameters may change.

Models come in many different flavours: mechanistic descriptions that are based on the principles that form the foundation of science, and mathematical constructs that capture a certain part of the nature of a natural system. The former is often referred to as a white-box model, indicating that one can see the mechanics of the box, whilst the latter are referred to as black boxes, as there is no real mechanistic thought behind the formulation of the mathematical object representing the modelled system's behaviour. Neither box nearly ever exists in a pure form; most often one makes use of a combination of the two approaches. Often the reason is simply that one does not know enough about the mechanics of the process, or that it is far too complicated for the intended use.
Chapter 2

Mapping the World into Equations

2.1 Mapping Nature into an Abstract Equation Object
Modelling is here primarily defined as the process of mapping a natural process into a set of equations that mimics the behaviour of the mapped process. This is commonly the first step, and it is best characterised as a design process because it involves a number of decisions to be taken by the person who constructs this first model. The model is an upper bound of all models that derive from it, and thus any assumptions made in this first mapping process are essential. Any assumption that is changed requires the reconstruction of the model, or at least of the part affected by that assumption.
Modelling on this level is consequently a core subject of any science and engineering education. The purpose of visiting this part of the modelling process in the context of this exposition is that this initial model is the mother of all derived models; understanding its nature and how it comes about is thus essential for the follow-up work. The following discussion also serves the purpose of defining a terminology suitable for discussing the different aspects of the model-manipulation process. At this point, we shall also constrain our discussion in order to focus on the modelling process. So in the first instance the domain of models is limited by taking a macroscopic view of nature, thus ignoring the underlying discrete nature of matter and energy. It should be noted that this is not a limitation, but is chosen for the sake of constraining the discussion.

2.1.1 Macroscopic Models

Models of physical processes, which we, following tradition, will refer to as the plant, are generated by splitting the spatial domain relevant for their description into a set of volumes, surfaces or points, whereby the space occupied by the process is chosen such that it includes all parts relevant for the description of its behaviour, including all those parts of the environment that interact with the process. The arguments on how to subdivide are in part rather subtle, and we will have to take up this subject on several occasions.

Figure 2.1: The process-relevant part of the Universe (plant and environment) and its dissection into control volumes.

The most common argument for subdividing the plant and its environment is based on processing units or, more physically motivated, on phases. If we take a little distance from plants and look more generically at processes, then taking the phase as an argument seems a natural choice. Obviously we are then faced with the difficulty of defining the phase boundary. The macroscopic view implies that the world is a continuous system, both in time and space. Thus all characterising variables, namely the state variables, are continuous; but what about a phase boundary? One could view it as a domain in which the states change very rapidly in a small transition region between the two adjacent phases, a thought on which people like Gibbs have exercised for a definition. Most of us, though, would not spend a thought on the issue and simply define the boundary as a surface. The reason for picking up this subject here is to demonstrate that already at this very early stage in the modelling process one makes not only length-scale assumptions, which eliminate the granular nature of matter, but the macroscopic view taken also introduces discontinuities in space when defining phases (Figure 2.1).
This division process is thus an abstraction which makes assumptions about the nature of the process: first, continuity of all conserved quantities, with matter, energy and momentum being the main ones (Section 2.4.1.1), and second, discontinuities of the intensive quantities defining the phase boundaries. The term macroscopic thus refers to the length scales being chosen. As we focus on macroscopic systems, we will primarily model systems with length scales significantly larger than the molecular dimensions. However, this is a matter of choice, and there are no reasons in principle for restricting things to macroscopic systems except for the application domain chosen for this exposition.
The relative time constant of each basic system with respect to the driving inputs may serve as the criterion for the division; that is, the model designer has to make a judgment of the flows and the capacitance effects as well as the relative dynamics in the flows, a subject that needs a more thorough discussion, which is deferred until later (Section 2.1.4).
Based on this view, one is led (tempted) to abstract the plant as shown in the

Figure 2.2: Pulling apart makes the interfaces more visible; the abstraction is then extended to a set of connected systems.

two figures, Figure 2.1 and Figure 2.2. In this picture the plant is divided into a set of volumes {A, B, C, D, E, F, G}, each shown as a circle, and three surfaces {b1, b2, b3} that communicate extensive quantities, as indicated by the arrows. The boundary surfaces, which have no capacity to accumulate extensive quantity, are introduced as connections. The fact that these connections have no capacity is essential for the understanding of the model.
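This dissection can be sketched as a graph: control volumes and reservoirs are nodes, and the capacity-free boundary surfaces are edges. The text does not specify which volumes each boundary joins, so the pairings below are purely illustrative assumptions.

```python
# Physical-topology sketch for a dissected plant in the style of
# Figure 2.2. The names follow the figure; the pairings are invented.
volumes = {"A", "B", "C", "D", "E", "F", "G"}
reservoirs = {"R1", "R2", "R3"}          # environment, infinite capacity
nodes = volumes | reservoirs

# Each boundary surface joins exactly two systems and holds no
# extensive quantity itself -- it is pure structure, not a capacity.
boundaries = {
    "b1": ("R1", "A"),   # assumed pairing
    "b2": ("B", "C"),    # assumed pairing
    "b3": ("E", "F"),    # assumed pairing
}

# Structural sanity check: every boundary connects two known systems.
assert all(u in nodes and v in nodes for (u, v) in boundaries.values())
print(sorted(boundaries))
```

Even this toy structure already carries the key modelling decision: only the nodes can accumulate, while the edges merely transfer.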

2.1.2 Abstraction: Physical Topology

The plant with its boundary represents the physical containment of the process. The abstraction shown on the right of Figure 2.2 thus represents the physical containment of the plant, for which reason we call this representation the physical topology of the model. For its graphical representation we introduce a couple of graphical objects: a lumped system is a capacity with uniform intensive properties, of finite or infinitesimally small dimension. An oval represents a distributed system, in which the intensive properties are a function of location. The spatial distribution may be indicated with 1-D, 2-D or 3-D, or with a small set of arrows indicating the respective co-ordinate system. The boundary may or may not be uniform, where uniform refers to the intensive properties. In the case of a connection to a lumped system, the common boundary must be uniform, because lumped is here synonymous with uniform. Similarly, 1-D distributed systems connect only in one direction to a distributed system, etc. Connections are shown as arrows, which expand when connecting distributed boundaries. They represent transfer of extensive quantity across a piece of surface the two connected systems have in common. Figure 2.3 shows the main components, including a reservoir, the latter being an abstract, infinitely large capacity.

Figure 2.3: Lumped and distributed systems and connecting streams (symbols shown: reservoir; steady state; lumped system; lumped connection; distributed connection; distributed system in 1D, 2D and 3D).

This abstract representation contains the main piece of information, namely how the person modelling the process sees the process in terms of dynamics and distribution. The term granularity refers to the size of the control volumes chosen to represent the containment: many control volumes, fine granularity; few, coarse granularity. The choice of granularity is motivated either by the time constant, a kind of response time that is a characteristic relating the capacity to the flows affecting it, or by the number of capacities required for an adequate description of the distribution. An example of the first is the modelling of the mixing behaviour in a stirred tank versus the much longer time it takes to change over the contents through inflow and corresponding outflow, thereby replacing the old contents. An excellent example of the second type of motivation is a distillation column, in which the intensive properties change with each tray.

2.1.3 Basic Building Blocks

The result of the decomposition by making the length-scale assumption of non-capacity boundaries defines two basic objects: namely, what we will refer to as systems, which have the ability to accumulate extensive quantity, and connections, which do not accumulate but have a resistance to the transfer of the extensive quantities accumulated in the systems. Thus we define:

Definition - System : An object that has the ability of accumulating extensive quantity.

Definition - Connection : An object that transfers extensive quantities between adjacent systems.

The description of the behaviour of the process is then given by the structure of the topology and the assembly of the behaviours of the two basic components.
The systems' descriptions are based on the dynamic balances of the conserved quantities, which define the state of the system. Since a capacity is mostly associated with mass, which occupies a finite volume, the literature frequently uses the term control volume in this context. We take a more generic view in that capacity effects are not limited to mass or volume but may also be associated with other objects such as surfaces and points; in fact, what will evolve below can be generalized to any kind of abstract system and conserved quantities (Section 2.2.1).
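The two definitions can be rendered as a minimal data structure. The class names, the single scalar inventory and the explicit Euler update below are illustrative assumptions, not notation from the text; the point is only that systems accumulate while connections merely transfer, so the total extensive quantity is conserved.

```python
from dataclasses import dataclass

@dataclass
class System:
    """An object that can accumulate extensive quantity."""
    name: str
    inventory: float = 0.0

@dataclass
class Connection:
    """An object that transfers extensive quantity between two adjacent
    systems; it has no capacity of its own."""
    origin: System
    target: System
    rate: float  # transferred extensive quantity per unit time

def step(connections, dt):
    """Advance all balances by one explicit Euler step: each system's
    inventory changes only through its connections."""
    for c in connections:
        c.origin.inventory -= c.rate * dt
        c.target.inventory += c.rate * dt

a = System("A", inventory=10.0)
b = System("B")
step([Connection(a, b, rate=2.0)], dt=0.5)
print(a.inventory, b.inventory)  # 9.0 1.0 -- the total is unchanged
```

Because a `Connection` holds no inventory, whatever leaves one system appears in the other: the conservation of the extensive quantity is built into the structure, exactly as the non-capacity boundary assumption intends.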

2.1.4 Time Scales

When employing the term time scale, we use it in the context of splitting the relative dynamics of a process, or of a signal that is the result of a process, into three parts, with a central interval in which the process or signal shows a continuous-dynamic behaviour. This dynamic window is guarded on the one side by the part of the process that is viewed as being too slow to be considered in the dynamic description and is thus assumed constant. On the other side the dynamic window attaches to the dynamics of the sub-processes that are viewed as occurring so fast that they are abstracted as events: they just happen in an instant. Any modelling process requires these assumptions, and it is the choice of the dynamic window that largely determines the fidelity of the model in terms of imaging the process dynamics.
Figure 2.4: The dynamic window in the time scale: events on the fast (small) end, static behaviour on the slow (large) end.

One may argue that one should then simply make the dynamic window as large as possible to avoid any problems; this, however, implies an increase in complexity, with the limits growing towards infinity, ultimately embracing the whole of the universe. Philosophically all parts of the universe are coupled, but this ultimate model is not achievable. When modelling, a person must make choices and place focal points, both in space as well as in time. The purpose for which the model is being generated thus always controls the generation of the model (Apostel, 1960; Aris, 1978), and the modeller, being the person establishing the model, is well advised to formulate the purpose for which the model is generated as explicitly as possible.
Thus one of the main choices to be made is the window in the time scales, which must be picked in advance. For practical reasons it will be different from zero and infinity: on the small time/length scale one will ultimately enter the zone where the granularity of matter and energy comes to bear, which limits the applicability of macroscopic system theory, and at the large end things get quite quickly infeasible as well. Whilst this may be discouraging, having to make a choice usually does not impose any serious constraints, at least not on the

large scale. Modelling the movement of tectonic plates or the material exchange
in rocks certainly asks for a different time scale than modelling an explosion,
for example. There are, though, cases where one touches the limits of the lower
scale, that is, when the particulate nature of matter becomes apparent. In most
cases, however, a model is used for a range of applications that quite definitely
also define a time-scale window.
The dynamics of the process is excited either by external effects, which in gen-
eral are constrained to a particular time-scale window, or by internal dynamics
resulting from an initial imbalance or internal transposition of extensive quan-
tity. Again, these dynamics are usually also constrained to a time-scale window.
The maximum dynamic window thus spans the extremes of the two kinds of win-
dows, that is, the external dynamics and the internal dynamics. A good model
is balanced within its own time scales and the time scale within which its envi-
ronment operates.
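The effect of choosing a dynamic window can be illustrated with a small numerical sketch (not from the notes; all rate constants are invented). A fast sub-process is pushed outside the window by treating it as an event, that is, by assuming it relaxes instantaneously (a quasi-steady-state assumption), which leaves only the slow dynamics inside the window:

```python
import math

# Slow state x feeds a fast intermediate y:
#   dx/dt = -y,   dy/dt = k_fast * (k_slow * x - y)
# With k_fast >> k_slow the intermediate lies outside the dynamic window:
# it is abstracted as an event, i.e. replaced by its quasi-steady value
# y = k_slow * x, which leaves the reduced model dx/dt = -k_slow * x.

k_slow, k_fast = 1.0, 1000.0
dt, n = 1.0e-4, 10000          # integrate to t = 1 with explicit Euler

x_full, y = 1.0, k_slow * 1.0  # full model, started on the slow manifold
for _ in range(n):
    x_full, y = (x_full + dt * (-y),
                 y + dt * k_fast * (k_slow * x_full - y))

x_qss = 1.0                    # reduced model: y eliminated algebraically
for _ in range(n):
    x_qss += dt * (-k_slow * x_qss)

# both track the slow analytic solution exp(-k_slow * t)
```

The reduced model reproduces the full one to within the order of the neglected time constant 1/k_fast, which is precisely the error made by shrinking the dynamic window.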

2.2 The Nature of the Elementary Systems


Having defined two elementary components, namely system and connection, the
nature of these two components is to be defined and discussed. First the system:

2.2.1 System Dynamics: Conservation of Extensive Quantities

A system is characterised by having the ability to accumulate extensive quantity,
thus has capacity. The behaviour of a capacity is described by the conservation
principles, that is, by a balance that equates the accumulation of the balanced
extensive quantity with the streams carrying this extensive quantity across the
system's boundary and the internal transposition of the conserved quantity into
others.
The extensive quantities depend on the extent (size) of the system, thus the
adjective extensive. Not all extensive quantities, though, are conserved. For
example, whilst volume is clearly a function of the size of the system and thus
an extensive quantity, it is not necessarily conserved1. The ones that are con-
served, mainly mass (n), energy (E) and momentum (M), we call fundamental
extensive quantities, and we use the symbol Φ for a generic conserved extensive
quantity. The set of considered fundamental extensive quantities is denoted by
E_f, which is a subset of the set of extensive quantities E, thus E_f ⊂ E. The set
of fundamental extensive quantities is minimal in the sense that it contains the
minimal number of quantities (== state variables) necessary to span a state
space for the representation of the systems. The capacities are in general dis-
tributed, that is, the state of the system is a function of the spatial co-ordinate
as well as of time, and the model takes the form of partial differential algebraic
equations.
1 An all-time favourite is the glass of water to which sugar is being added.

2.2.2 Integral Balance

The conservation law for an arbitrary piece of the space, which is termed system S,
balances the accumulation of the extensive quantity in the domain with the flow
across its boundary Ω and the internal transposition of extensive quantities. In
most cases the piece of space is a volume, but it may also be a geometrical
object of lower dimension, thus a surface or a point. A system must only be
able to accumulate extensive property and must not overlap with any other system.

Figure 2.5: An arbitrary system S defined by a volume or, equivalently, its
boundary Ω, showing the flux vector δϕ̂, the external normal vector ω, the
boundary movement v_Ω and the co-ordinates r1, r2, r3 relative to the origin
of the fixed observer.

The general conservation law may be written as:

Φ̇_S := ϕ̂_Ω + N_S^T ϕ̃_S .   (2.1)

whereby Φ̇_S is the accumulation of fundamental extensive quantity in the system
S, ϕ̂_Ω is the net flow of fundamental extensive quantity across the boundary
and ϕ̃_S the net transposition rate of extensive quantity, with N_S^T reflecting
the transposition ratios, which for chemical reactions are called the stoichio-
metric coefficients. We generally assume to model continuous systems where
potential fields drive the transfer, such that all assumptions of field theory
apply. In particular we assume continuity of the (fundamental) extensive
quantities in the spatial domain or region (see the continuity conditions,
Section 2.4.1.1). In doing so, we may expand the transposition term to a volume
integral ∫_V · dV of the intensive quantity δϕ̃_S, which is the extensive
quantity production normed by the volume:

ϕ̃_S := ∫_V δϕ̃_S dV .   (2.2)
The flow across the boundary is the sum of two terms:

1. The integral over the boundary ∫_Ω · dΩ of the flux tensor2
   δϕ̂_S ∈ R^(3 × dim δϕ̂_S) that is projected onto the external normal
   vector3 of the boundary, and

2 Here the indices of the tensor are chosen to be in the order of spatial co-ordinate and type
of extensive quantity.
3 Pointing away from the boundary when positive, thus defining a co-ordinate system point-
ing out.

2. the flow across the boundary due to the movement of the boundary with
   velocity v_Ω:

ϕ̂_Ω := − ∫_Ω δϕ̂^T ω dΩ − ∫_Ω δΦ_{S∨E} v_Ω^T ω dΩ .   (2.3)

The second term maps the velocity onto the external normal vector of the surface
to obtain the normal flow per unit surface, which is then weighted with the
density of the domain into which the boundary expands. The notation δΦ_{S∨E}
indicates the dependency of the density on the sign of the movement of the
boundary, if one allows for the density to change discontinuously at the boundary,
as the boundary may represent a moving phase boundary. All velocities are
measured relative to a fixed observer in space, as indicated in Figure 2.5.
Excluding the case where accumulation occurs in the boundary itself, as
may be the case with electrical charges, the accumulation term is:

Φ̇_S := (d/dt) ∫_0^{V(t)} δΦ_S dV .   (2.4)

With the volume being:

V(t) := ∫_0^t ∫_Ω v^T ω dΩ dt .   (2.5)

Expanding the accumulation term using the generalized Leibnitz rule (Sec-
tion 7.2.1):

(d/dt) ∫_0^{V(t)} δΦ_S dV := −δΦ_{S∨E} V̇(t) + ∫_V (∂δΦ_S/∂t) dV ,   (2.6)
 := − ∫_Ω v^T ω dΩ δΦ_{S∨E} + ∫_V (∂δΦ_S/∂t) dV ,   (2.7)
 := − ∫_Ω δΦ_{S∨E} v_Ω^T ω dΩ + ∫_V (∂δΦ_S/∂t) dV .   (2.8)
with v_Ω being the velocity vector of the boundary relative to the stationary
co-ordinate system. Substituting these terms into the conservation law yields

∫_V (∂δΦ_S/∂t) dV − ∫_Ω δΦ_{S∨E} v_Ω^T ω dΩ := − ∫_Ω δϕ̂^T ω dΩ
 − ∫_Ω δΦ_{S∨E} v_Ω^T ω dΩ + ∫_V N_S^T δϕ̃_S dV ,

which reduces to:

∫_V (∂δΦ_S/∂t) dV := − ∫_Ω δϕ̂^T ω dΩ + ∫_V N_S^T δϕ̃_S dV .   (2.9)

The surface integral over the flow may be transformed into a volume integral
by applying Gauss' divergence theorem to each term:

∫_V (∂δΦ_S/∂t) dV := − ∫_V ((∂/∂r)^T δϕ̂)^T dV + ∫_V N_S^T δϕ̃_S dV ,   (2.10)
 
The vector ∂/∂r := [∂/∂r_i]_{∀i} is the gradient operator.
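The generalized Leibnitz rule used above can be checked numerically in one dimension. The sketch below (not part of the notes; the field f and the boundary motion b(t) are invented for the test) verifies that the time derivative of an integral over a moving domain equals the volume integral of the partial time derivative plus a boundary term, the 1-D analogue of the surface terms appearing in (2.6) to (2.8):

```python
import math

def b(t):                 # position of the moving boundary (invented)
    return 1.0 + 0.5 * t  # boundary velocity db/dt = 0.5

def f(r, t):              # density field; linear in r, so the trapezoidal
    return math.exp(-t) * (1.0 + r)  # rule integrates it exactly

def integral(t, n=2000):  # I(t) = integral of f(r, t) over [0, b(t)]
    bt = b(t)
    h = bt / n
    s = 0.5 * (f(0.0, t) + f(bt, t)) + sum(f(i * h, t) for i in range(1, n))
    return s * h

t0, dt = 0.7, 1.0e-4
# left side: d/dt of the moving-domain integral (central difference)
lhs = (integral(t0 + dt) - integral(t0 - dt)) / (2.0 * dt)
# right side: volume term plus boundary term f(b, t) * db/dt
volume_term = -integral(t0)          # here df/dt = -f
boundary_term = f(b(t0), t0) * 0.5
rhs = volume_term + boundary_term
```

The two sides agree to the accuracy of the finite differences, mirroring how the boundary-movement term enters the accumulation balance.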

2.2.3 Differential Balance

Reducing the volume to zero, the conservation equation for each point volume
is obtained:

∂δΦ_S/∂t := − ((∂/∂r)^T δϕ̂)^T + N_S^T δϕ̃_S ,   (2.11)

The same result is obtained using the small box approach (Lin and Segel, 1988),
in which the change of flow from one end of the small box to the other is obtained
as the first variation. Often this is also called the shell balance (see Bird et al.,
2001; Sears, 1963).
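The shell-balance idea can be mirrored discretely: each cell accumulates exactly what flows in through one face minus what flows out through the other. A minimal finite-volume sketch for 1-D diffusion (with the flux law δϕ̂ = −D ∂c/∂r; all numbers are invented) shows that this construction conserves the balanced quantity to machine precision and relaxes to a uniform profile:

```python
# 1-D shell balance for dc/dt = -d(flux)/dr with flux = -D dc/dr and
# closed (zero-flux) boundaries; parameters are illustrative only.
D, L, n = 1.0e-3, 1.0, 50
dr = L / n
dt = 0.2 * dr * dr / D                      # stable explicit time step
c = [1.0 if i < n // 2 else 0.0 for i in range(n)]
mass0 = sum(c) * dr                         # initial inventory

for _ in range(20000):
    flux = [0.0] * (n + 1)                  # face fluxes, zero at both ends
    for i in range(1, n):
        flux[i] = -D * (c[i] - c[i - 1]) / dr
    # accumulation in each shell = flux in - flux out (per unit length)
    c = [c[i] + dt * (flux[i] - flux[i + 1]) / dr for i in range(n)]

mass = sum(c) * dr                          # inventory after diffusion
```

Because each interior face flux appears once as an inflow and once as an outflow, the sum over all shells telescopes, which is the discrete image of the divergence term in (2.11).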

2.3 Internal Dynamics:
Transposition of Fundamental Extensive Quantities

The terminology used in connection with a transposition of one extensive quan-
tity into another one is usually associated with the causing phenomena: for
example, changing the chemical species is a reaction, but energy conversion due
to shifting molecules past each other is termed friction, changing the state of ag-
gregation is an evaporation, sublimation or condensation, and transmuting mass
into energy is termed a nuclear reaction. Some of these phenomena occur at
the phase boundary, whilst others occur inside the system. In all cases,
though, the transposition term vanishes when combining the quantities being
converted. For the reactive case, the sum of all component mass balances elim-
inates the cumulative reaction term.
The phenomena associated with phase change occur at a phase boundary. In
most cases it is assumed that the boundary itself does not possess the ability
to accumulate mass, in which case transposition occurs instantaneously (Sec-
tion 2.4.2.1). Species may, though, be seen to accumulate on a surface through,
for example, adsorption or absorption processes. In this case thin films are
abstracted as thin layers that may accumulate mass, which again is an ab-
straction associated with choosing process-relevant time and length scales.
Since the description of reactive systems is the richest of them all, the others can
be seen as a subset of the reaction scheme description, with the main differences
being the stoichiometric coefficients. For non-reactive systems the latter are
simply one. Chemical and biological reactions convert one or several types of
species into a set of other species. The relation between the reactants and the
products is described by chemical or biological equations known as stoichiomet-
ric equations. Defining the arbitrary species as A_i ∈ A, the latter being the set
of species, one defines the reaction symbolically with the equation:

0 := N^T A .   (2.12)

The production term can be written as:

ϕ̃ := V N^T η̃ ,   (2.13)

whereby the different quantities are:

N^T :: stoichiometric matrix := [ν_ri]_{∀r,∀i}
A :: species vector := [A_i]_{∀i}
ν_ri :: stoichiometric coefficient of reaction r and species i
η̃ :: extent of reaction, a reaction rate normed by volume and
      stoichiometric coefficient.

With the production rate being a function of the molar concentrations:

η̃_r := η̃_r(c) .   (2.14)

Here T is the temperature and c is the concentration vector, usually the mass
normed by the volume.
The function η̃_r(c) is often of the form:

η̃_r := k_r(T) ∏_{∀j} c_j^{γ_rj} ,   (2.15)

whereby the γ_rj are usually the respective stoichiometric coefficients, though not
necessarily. Collecting this into matrix form, one may write:

η̃ := K(T) g(c) .   (2.16)

K is the diagonal matrix of the reaction constants, the rest being captured
in the vector of functions g(c). The idea behind this model is that the molecules
have to meet in order to undergo reaction. The probability to meet is then
related to the density, being the number of molecules in a volume. Choosing a
statistical argumentation, though, yields a dependency on the mole fraction and
not the concentration (Denbigh and Turner, 1971), but in practice it is observed
that the molar concentration provides a better match. Whilst this is the most
commonly used description for simple reaction kinetics, there exist many more
for more complicated cases.
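Equations (2.13) to (2.16) amount to simple bookkeeping, which can be sketched for a single illustrative reaction A → 2B with first-order kinetics (all values invented; this shows the structure of the computation, not any particular chemistry):

```python
V = 2.0                       # system volume
N = [[-1.0, 2.0]]             # stoichiometric matrix N, one row per reaction
k = [0.5]                     # diagonal of K(T) at the given temperature
c = [4.0, 1.0]                # molar concentrations of A and B

def g(conc):                  # kinetic functions g(c): first order in A
    return [conc[0]]

# eta~ := K(T) g(c), the extent-of-reaction rates (2.16)
eta = [k[r] * g(c)[r] for r in range(len(N))]

# phi~ := V N^T eta~, the production term (2.13)
phi = [V * sum(N[r][i] * eta[r] for r in range(len(N)))
       for i in range(len(c))]
```

For every mole of A consumed, two moles of B appear, exactly as the stoichiometric row [−1, 2] dictates; summing the production terms weighted by the inverse stoichiometry makes the transposition vanish, as stated above.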

Annotation: Reactions in Volumes or on Surfaces

Here we have chosen to have the transposition mechanism occur in the interior
of the system. One can also view the conversion of one mass as being placed
in an imaginary system that is coupled through imaginary mass streams with
the system representing the physical containment. This imaginary system does
not accumulate any mass but has mass inflows of the exact amount of reactants
presently being converted and ejects the corresponding amount of products.
Both views are presented in the literature independently. We usually take the
first viewpoint, though a change of the representation reflecting the second
approach is readily achieved by splitting the component mass balances into two
separate balances: one that reflects the hydraulics and a pseudo-steady-state one
for the reaction system.

Annotation: Reaction Constants

The literature often gives the reaction dynamics in the form of a rate of disappear-
ance or appearance of a species per unit volume:

r̃_ri := (ν_ri / |ν_ri|) k'_ri g(c) .   (2.17)

It is up to the user then to scale it properly with the stoichiometric coefficients:

ϕ̃ / V := (ν_ri / |ν_ri|) r̃_ri .   (2.18)

The expression used here assumes appropriate scaling of the reaction constants,
thus

k_r := k'_ri / |ν_ri| .   (2.19)

This removes the dependency of the reaction constant on the species being taken
as the reference in the definition of the reaction rate.
The second comment relates to the term "constants" being used in this context.
The reaction constant is a strong function of the temperature. The standard
model is known as the Arrhenius law, that is:

k_r(T) := k_r^o e^(−E_Ar / (R T)) .   (2.20)

k_r^o :: the pre-exponential factor
E_Ar :: the activation energy
R :: gas constant
T :: absolute temperature

Taking this approach removes the dependency of the reaction constant on the
species and consequently can be seen as a logical reformulation of what is used
and seen as standard in the chemical engineering and physical chemistry com-
munity.
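A small sketch of the Arrhenius law (2.20), with invented parameter values, illustrates how strongly the so-called constant depends on temperature:

```python
import math

R = 8.314                  # gas constant, J/(mol K)
k0 = 1.0e7                 # pre-exponential factor (assumed value)
EA = 5.0e4                 # activation energy, J/mol (assumed value)

def k_arrhenius(T):
    """k_r(T) = k_r^o * exp(-E_Ar / (R T)), equation (2.20)."""
    return k0 * math.exp(-EA / (R * T))

k300 = k_arrhenius(300.0)
k310 = k_arrhenius(310.0)
ratio = k310 / k300        # effect of a 10 K temperature rise
```

With an activation energy around 50 kJ/mol the rate roughly doubles for a 10 K rise near room temperature, which is the familiar rule of thumb.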

2.4 Extensive Quantity Transport


2.4.1 Basic Transfer Laws

At the turn of the last century Josiah Willard Gibbs cast thermodynamics into
a generalized framework by considering the analogy with continuum mechan-
ics. He classified the thermodynamic variables into pairs such as entropy and
temperature, volume and pressure, surface area and surface tension, molar mass
and chemical potential, but also electrical field and dipole moment etc. He also
defines the natural variables of the energy E, which are extensive quantities
including entropy S, volume V, species mass n, dipole moment P, magnetic
moment M, surface S, length s, to mention the main ones (Alberty, 1994).
Through these developments the transfer description was tied to one particu-
lar fundamental extensive quantity that provides a measure for the transport
potential, which in natural systems is the energy.

2.4.1.1 Continuity Condition

The continuity conditions are a reflection of the basic assumption in macroscopic
field theory, namely the continuity of the fundamental extensive quantity in the
spatial domain. Let the internal energy U and the component mass n be the
fundamental extensive quantities; then macroscopic field theory assumes
continuity of the fundamental extensive quantities in the spatial domain, that
is, the length scale is so large that the quantisation of mass and energy is not
observable and the world appears as a continuous entity in the space it occupies.
Writing the internal energy as a function of its natural extensive variables S, V, n,
which in turn are a function of the spatial co-ordinate r: U(S(r), V(r), n(r)),
the spatial derivative of the internal energy is:

∂U/∂r := (∂U/∂S)(∂S/∂r) + (∂U/∂V)(∂V/∂r) + (∂U/∂n^T)(∂n/∂r) .   (2.21)

With the extensive quantities being continuous in the spatial space, all the
other partial derivatives in the expression, namely ∂U/∂S, ∂U/∂V, ∂U/∂n^T, are
continuous as well. They are also of intensive nature (Euler degree zero) and are
point properties of the system. Their gradient is the driving force for the
transfer of the conjugated extensive quantity. These quantities are thus special
intensive quantities, because they drive the extensive quantity transfer. The
first of the above-defined partial derivatives is the thermodynamic temperature;
the second is the pressure and the third the chemical potential 4:

T := (∂U/∂S)_{V,n}   (2.22)

p := −(∂U/∂V)_{S,n}   (2.23)

μ := (∂U/∂n^T)_{V,S} .   (2.24)

In the case where different fields are present and acting on, for example, mass,
such as an electrical field and a concentration field, all forces must be considered
in the above definition5. Thus care must be taken when defining the driving
forces as the number of natural variables increases. In these cases Legendre
transformations (Modell and Reid, 1974) are used to define new energy func-
tions such as a modified Gibbs free energy, which, when differentiated, define
the modified chemical potential. The article of Alberty (Alberty, 1994) and
the book of Guggenheim (Guggenheim, 1967) provide a detailed discussion of
this issue. With the potentials and the conjugated intensive quantities being
thus continuous in the spatial domain, they are also continuous at the system
boundaries, a property which is of particular interest at phase boundaries. As
one of the consequences the flux is also continuous, as we shall see below.6 (Sec-
tion 2.4.2.1)
4 Note the difficulty in the nomenclature of these quantities: whilst energy is a potential, the
chemical potential is really not a potential; rather, chemical potential and component mass
form a conjugate pair...
Defining π as the driving force, thus the conjugate to the potential, the
flux tensors for the most common transfer laws are7,8:
For an anisotropic medium with constant "conductivity" matrix λ:

δϕ̂ := −λ (∂/∂r) π ,   (2.25)

For an isotropic medium with constant scalar "conductivity" λ:

δϕ̂ := −λ (∂/∂r) π ,   (2.26)

For an isotropic medium with variable "conductivity" λ(r) 9:

δϕ̂ := −(∂/∂r) π^T λ^T ,   (2.27)

These transfer laws are not basic laws but represent a simplification of the
behaviour of the transfer system. They should thus be seen as black-box models.
All the transfer laws have, though, two basic common properties. Let ∇π be
the gradient of the driving force and the flux as a function of the gradient be
denoted by δϕ̂(∇π); then

δϕ̂(0) := 0 .   (2.28)

Thus there is no extensive quantity being transferred if there is no driving force.
Additionally

δϕ̂(∇π) := sign(∇π) δϕ̂(|∇π|) .   (2.29)

The gradient of the driving force determines the direction of the flow.
These basic transfer laws describe the transfer within a system and thus are part
of the description of a distributed system.
5 For example, mass may also be transferred due to magnetic and electrical fields
if the molecules or particles are charged ((Deen, 1998), (Groot and Mazur, 1983),
(Wesselingh and Krishna, 2000)).
6 Another continuity proof is given in (Tisza, 1966), p. 133.
7 Note that, in order to avoid some difficulties in the interpretation, the flux is written as a
tensor (matrix) even if it reduces to a vector.
8 The quantities preceded by a δ are point properties, thus intensive quantities that measure
density in one or the other form. The intensity is the respective extensive quantity normed
by the volume.
9 Notation: the grad operator is here not used in transposed form; thus the representation
of the remainder of the expression is adjusted accordingly.
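The two generic properties (2.28) and (2.29) are easy to verify for the isotropic constant-conductivity law (2.26) in one spatial dimension; the conductivity value below is purely illustrative:

```python
lam = 0.6                       # scalar "conductivity" (invented value)

def flux(grad_pi):
    """1-D instance of (2.26): flux runs down the driving-force gradient."""
    return -lam * grad_pi

def sign(x):
    return (x > 0) - (x < 0)

# Property (2.28): no driving force, no transfer.
no_force_no_flux = flux(0.0) == 0.0

# Property (2.29): the gradient of the driving force sets the direction.
g = -2.5
direction_ok = flux(g) == sign(g) * flux(abs(g))

# The linear law transfers down the gradient: positive gradient, negative flux.
down_gradient = flux(1.0) < 0.0
```

Any constitutive transfer law proposed as a black-box model can be screened against these two properties in the same way.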

2.4.2 Phase Boundaries

The term phase is used intrinsically to define a section of the space that exhibits
a certain uniformity in its behaviour, appearance and properties. At the bound-
ary, some of the intensive properties change discontinuously, thus defining the
boundary:

Definition - Phase ψ : A phase is a piece of the space completely
enclosed by a phase boundary, defined as the locus of a discontinuity
in the physical properties in the space.

Webster defines phase in the context of physics as a homogeneous, physically dis-
tinct, and mechanically separable portion of matter present in a non-homogeneous
physicochemical system. This definition builds on the concept of mechanical
separation of phases, which implies a difference in their properties that can be
exploited for their separation.
The concept of phase may be expanded by allowing for pseudo phases, that is,
spatially averaged phases:

Definition - Pseudo Phase ψ̄ : A spatial domain with spatially
averaged properties enclosed by a phase boundary.

This averaging may be interpreted as an order-of-magnitude assumption for the
length scale being modelled, which also implies the choice of a time scale.

2.4.2.1 Flux Condition

Besides the continuity condition discussed above, the flux condition is essential
for the solution. Balancing about a phase boundary is of particular interest.
The integral balance for a control volume (system S := a + b) placed about the
phase boundary (Figure 2.6) reads

Figure 2.6: Balancing about a phase boundary: a control volume of thickness s
spanning system a (phase α) and system b (phase β), with boundary Ω_{a+b},
corner points A, B, C, D, I, J and the fluxes ϕ̂_{−ε} and ϕ̂_{+ε} on either side
of the phase boundary.


Φ̇_{a+b} := − ∫_Ω δϕ̂^T ω dΩ + ∫_V N_{a+b}^T δϕ̃_{a+b} dV .   (2.30)

The overall control volume splits into two parts, namely a part (system a) on
one side of the phase boundary and one part (system b) on the other side of
the boundary. The surface of the overall control volume is split into six sections,
namely the two running along the phase boundary, Ω_AB and Ω_CD, and four small
pieces for the two edges crossing the phase boundary: Ω_BJ, Ω_JC, Ω_DI, Ω_IA:

Φ̇_{a+b} := − ∫_{Ω_IA} δϕ̂^T ω dΩ − ∫_{Ω_AB} δϕ̂^T ω dΩ − ∫_{Ω_BJ} δϕ̂^T ω dΩ
 + ∫_{Ω_JC} δϕ̂^T ω dΩ + ∫_{Ω_CD} δϕ̂^T ω dΩ + ∫_{Ω_DI} δϕ̂^T ω dΩ
 + ∫_V N_{a+b}^T δϕ̃_{a+b} dV .   (2.31)

Letting the distance s of the boundaries Ω_AB and Ω_CD approach zero, the in-
tegrals over the edges disappear and, clearly, the accumulation term also ap-
proaches zero if there is no accumulation in the boundary itself. The transposi-
tion term reduces to the transposition taking place in the boundary itself, such
as effects associated with the phase change:

0 := − ∫_{Ω_AB} δϕ̂^T ω dΩ + ∫_{Ω_CD} δϕ̂^T ω dΩ + ∫_V N_{a+b}^T δϕ̃_{a+b} δ(s − 0) dV

 := +ϕ̂_{−ε} − ϕ̂_{+ε} + N_{a+b}^T δϕ̃_{Ω_{a+b}} ,   (2.32)

with δ(s − 0) being the Dirac delta function.


The transfer of extensive quantity through the interface thus balances the inflow
with the outflow and the transposition of extensive quantity in the interface.

2.4.2.2 Jump Condition

The continuity conditions give rise to the jump conditions at the phase bound-
ary: Let π_{−ε} and π_{+ε} be the conjugates to the potential left and right of the
boundary; then the continuity condition states that

π_{−ε} := π_{+ε} .   (2.33)

Given a relation of the potential to another associated intensive quantity z of
the form f(z), the continuity condition gives:

f_{−ε}(z_{−ε}) := f_{+ε}(z_{+ε}) ,   (2.34)

which gives a jump condition in the field of z:

z_{−ε} := f_{−ε}^{−1}(f_{+ε}(z_{+ε})) .   (2.35)

Frequently, this equation takes the form:

z_{−ε} := k_{−ε|+ε} z_{+ε} .   (2.36)



Example: Nernst Distribution Coefficient Let a represent the system on
the left and b the system on the right of a phase boundary; then the equilibrium
condition for the diffusional mass transfer is equal chemical potential, assuming
an arbitrary species A:

μ_{a,A} = μ_{b,A}   (2.37)

For the chemical potential we assume the simple model

μ_{s,A} := μ°_{s,A} + R T_s ln x_{s,A}   (2.38)

which assumes ideal solutions for the component A in both phases, x_{s,A} being
the mole fraction. The Nernst distribution constant gives the ratio of the two
concentrations in the phases:

x_{a,A} / x_{b,A} := exp((μ°_{b,A} − μ°_{a,A}) / (R T))   (2.39)

which uses the fact that the temperature is equal at the boundary (continuity
condition of the potentials at the surface).
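The example can be checked numerically. With assumed (invented) standard chemical potentials, choosing mole fractions in the ratio (2.39) indeed makes the chemical potentials (2.38) of the species equal on both sides of the boundary:

```python
import math

R, T = 8.314, 298.15            # gas constant, J/(mol K), and temperature, K
mu0_a, mu0_b = -2.0e3, -4.0e3   # standard potentials, J/mol (assumed values)

# Nernst distribution ratio x_aA / x_bA from (2.39)
ratio = math.exp((mu0_b - mu0_a) / (R * T))

x_b = 0.05                      # arbitrary mole fraction of A in phase b
x_a = ratio * x_b               # distribute A according to (2.39)

# chemical potentials (2.38) of A in the two phases
mu_a = mu0_a + R * T * math.log(x_a)
mu_b = mu0_b + R * T * math.log(x_b)
```

The phase with the lower standard potential holds the larger share of the species, which is why the ratio here comes out below one.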

2.5 Variables: Nature, Role and Transformations

A mathematical model consists of equations and variables. Constructing
it from the base physical concepts gives meaning to each of the introduced
variables. There is first the state: it is the set of variables that form the base
space, the state space, for which reason we use the term fundamental state
for these variables. They are easy to find in a model, as they are the ones
that appear in the basic accumulation term, namely the conserved quantities.
The conservation says that the change of the system's conserved quantity is the
sum of the net inflow of the conserved quantity and the internal transposition
of the conserved quantity. Thus the conservation introduces, besides the time
derivative of the conserved quantity or its respective density, also the flows or
fluxes and the transposition or the conversion rates. Both are a function of state-
dependent quantities. The flows are driven by the conjugates to the potentials
and the transposition by concentrations, for example. These state-dependent
quantities we term secondary state variables, thereby indicating that they
are the result of a mapping of the fundamental state. In addition, variables
are introduced that describe the characteristics of the underlying system, the
properties. Often these are material properties, which in turn again may be
a function of the fundamental state or the secondary state variables. Finally
the term parameters is used for quantities that appear usually as coefficients
in approximate partial models, such as polynomial approximations of physical
properties. For quantities that are otherwise constant, assumed constant or
known as a function of time, the term conditions shall be used.
To establish these concepts further, the nature of a generic state-space represen-
tation is discussed next:

2.5.1 A Proper State-Space Representation

Once a model is completed, the number of variables in the model needs to
equal the number of equations, whereby the assignment of a numerical value to
a variable is counted as an equation. Whilst this appears to be a rather simple
matter, it turns out that it is not quite such an easy bookkeeping task.
When mapping nature into a mathematical object we followed a well-specified
order, and when properly implemented the procedure leads to a complete model,
meaning that the number of equations matches the number of variables. The
procedure starts with formulating the relevant conservation laws, usually in the
differential form (Equation (2.11)). The conservation equations define the basic
state space with the conserved quantities being the components of the base state.
This space spanned by the fundamental state is minimal dimensional if one chooses
the primitive conserved quantities, for example the set of component masses,
the total energy, the momentum and electrical charge. Obviously choosing the
component masses and the total mass is not a minimal choice, as the total
mass is a linear combination (sum) of the component masses. However, for the
description of processes where no reaction or any other matter transposition and
no separation of chemical or other species is taking place, a total mass balance
may be sufficient. The use of the term minimal is thus to be handled with some
care.
Let us define a fundamental state space consisting of the conserved quantities
required to capture the behaviour of the primitive system, thus x := Φ, and the
differential conservation (Equation (2.11)) and its integral:10

ẋ := ẋ(x̂, x̃) ,   (2.40)

x(t) := ∫_0^t ẋ dt + x(0) .   (2.41)

The integral also defines a new set, namely the initial conditions x(0) := Φ(0).
Assuming that the initial conditions are given, we thus add the equations:

x(0) := given .   (2.42)

The balance equations define two new sets of variables, namely the flows x̂ := ϕ̂
and the transposition x̃ := ϕ̃ of extensive quantities.
The next step in building the model is to define the transport of extensive
quantity and thereafter the transposition. The transport always links two
systems together. Thus we introduce the directed flow from system a to system
b as

x̂_{a|b} := x̂_{a|b}(y_a, y_b, p_{a|b}) ,   (2.43)

with p_{a|b} denoting the properties of the physical transport system. The flow
equation introduces a set of new variables, namely the driving forces and the
properties associated with the interface with respect to the transferred quantity.
The driving forces are a function of the state of the respective system and thus
state-dependent information. To signify this fact we introduce the term of a
secondary state, symbolised by the character y. The driving forces take a special
role in the formulation of the model (Page 28).
In contrast to the transport, the transposition takes place inside the system and
is thus only a function of the state of the system in which it takes place:

x̃_s := x̃_s(y_s, p_s) .   (2.44)

The transposition, too, is driven by a set of variables that are a function of the
state of the system. A set of property variables p_s enables characterising the
transposition in its given environment.
Both the transfer and the transposition introduce a state-dependent set of
variables, which we termed secondary state variables. These variables must be
a function of the state; thus the type of equations determining them is of the
generic form:

y_s := y_s(y_s, x_s, p_s) .   (2.45)

10 Note that the notation for the function is abbreviated in that, whenever it is meaningful,
the function name is identical with the variable name, which improves readability.

This set of equations is in general implicit, though most of the equations are
explicit in the secondary state variables. It is mostly the temperature that
appears implicitly in the equations. What, though, must be the case is
that this set of equations is a mapping from the fundamental state space
spanned by the x-variables. It must thus be possible to solve the above set of
equations for y_s:

y_s := y_s(x_s, p_s) .   (2.46)

Solving analytically is nearly only possible if the equations are linear in the
temperature; otherwise they must be solvable numerically. The
reader should note that the solutions may not always be unique, and selecting the
correct solution may not be a trivial task. Equations of state are objects that
notoriously pose this problem. Whilst this is not a principal structural
issue, it certainly is a practical one, and it should be kept in mind even though
we shall ignore it for the time being.
The properties introduced as additional quantities must also be a function
of the state, very similarly to the secondary state variables. In fact one could
merge them with the secondary state, were there not very often a special
significance associated with them. Thus:

p_i := p_i(y, x, Θ_i) ;  i ∈ {a|b, s} .   (2.47)

Given the parameters:

Θ_i := given ,   (2.48)

the model is well-posed, as we can show by substitution:

y_s := y_s(x_s, p_s(y, x, Θ_i)) ,   (2.49)

y_s := y_s(x_s, Θ_s) ,   (2.50)
x̂_{a|b} := x̂_{a|b}(x_a, x_b, Θ_a, Θ_b) ,   (2.51)
x̃_s := x̃_s(x_s, Θ_s) ,   (2.52)
ẋ_s := ẋ_s(x_a, x_b, Θ_a, Θ_b) ;  s ∈ {a, b} ,   (2.53)

x_s(t) := ∫_0^t ẋ_s(x_a, x_b, Θ_a, Θ_b) dt + x_s(0) ;  s ∈ {a, b} .   (2.54)

The last two equations are a set of ordinary differential equations and its integral
over time, respectively: a standard initial value problem.
The two keys to the formulation of the model are thus the choice of fundamental
variables with the respective conservation principles, and the mapping of the
secondary state variables from the fundamental state that is defined in the first
step. The chosen formulation is independent of the number of systems involved
in the description, as the substitution always involves only two adjacent systems
and can thus be done recursively over the number of systems. As we shall see
later, this is also the case when making order-of-magnitude assumptions. This
locality principle does not break down with making simplifying assumptions if
handled properly.
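The substitution chain (2.49) to (2.54) can be sketched for the smallest possible configuration: two lumped systems a and b exchanging one conserved quantity through a single connection. All names and numbers below are invented for illustration; the secondary states y play the role of temperatures and the capacities that of the properties p:

```python
C_a, C_b = 2.0, 3.0        # "properties": capacities of systems a and b
c_ab = 0.8                 # transfer coefficient of the connection a|b

def y(x, C):               # secondary-state mapping (2.46): y = x / C
    return x / C

def flow_ab(xa, xb):       # directed flow (2.43), driven by y_a - y_b
    return c_ab * (y(xa, C_a) - y(xb, C_b))

xa, xb = 10.0, 0.0         # initial conditions (2.42)
total0 = xa + xb           # conserved quantity held by both systems

dt = 0.01
for _ in range(5000):      # explicit Euler for the IVP (2.53)-(2.54)
    f = flow_ab(xa, xb)
    xa, xb = xa - dt * f, xb + dt * f

total = xa + xb
```

Because the connection only redistributes the conserved quantity, the total is invariant, and the systems relax until the secondary states (driving forces) coincide; with the capacities 2 and 3 the inventory settles at the ratio 2:3.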

2.5.2 The Text Book Representation


The above representation, (Equation (2.40))-(Equation (2.48)), represents the
plant in the form of a set of ordinary differential equations augmented with a
set of algebraic equations. Traditionally one teaches to represent the model
in the space of the observed quantities, where observed is to be interpreted as
measurable. Thus text books will usually aim at a representation in the space
of a set of intensive variables such as concentration, temperature and pressure,
accompanied by an extensive quantity, often volume. Such a representation
is usually not minimal but includes redundant information. For example, choosing
the concentration vector, temperature, pressure and volume contains at least
one redundant variable, as one of the concentrations is a function of the others
and the volume. The mapping into the space of observables is motivated by
the fact that one is most often interested in these quantities and not in the
conserved ones. Also, it results in a set of ordinary differential equations that
can be solved using standard techniques: no need for a DAE solver. It is thus
no surprise that one can find statements such as "early substitution is a good
practice" in standard textbooks.
The two representations are formally linked by a variable transformation. Let
the model (Equation (2.40))-(Equation (2.48)) be captured in the form

ẋ := f (v) , (2.55)
v := g(y) , (2.56)
y := h(x) , (2.57)

where x is the vector of conserved quantities, v the vector of transports and
transpositions, and y again the vector of secondary states.
Seeking a representation in the secondary state, for example, one differentiates
the secondary state with respect to time:

ẏ := (∂h(x)/∂x) ẋ , (2.58)
  := (∂h(x)/∂x) f (g(y)) . (2.59)

If the expression ∂h(x)/∂x is not a function of x, the result is, as desired, a function
of y only.
In some cases Equation (2.57) is given explicitly in the primary state:

x := d(y) , (2.60)

then

ẋ := (∂d(y)/∂y) ẏ , (2.61)
ẏ := (∂d(y)/∂y)^{-1} f (g(y)) , (2.62)

in which case (∂d(y)/∂y) must be invertible.
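A minimal numerical check of the second scheme (2.60)-(2.62) on a scalar toy model; both the model f(x) = −x and the invertible mapping x = d(y) = exp(y) are hypothetical choices for illustration.

```python
import math

# Sketch of the transformation (2.60)-(2.62) on a scalar toy model:
# primary-state ODE x_dot = f(x) = -x and an assumed explicit mapping
# x = d(y) = exp(y).  Then dd/dy = exp(y), and (2.62) gives
#   y_dot = exp(-y) * f(exp(y)) = -1,
# so y falls linearly while x decays exponentially.

def f(x):                  # right-hand side in the primary (conserved) state
    return -x

def d(y):                  # explicit mapping x := d(y), invertible for all y
    return math.exp(y)

def y_dot(y):              # transformed right-hand side, Equation (2.62)
    return f(d(y)) / math.exp(y)   # (dd/dy)^(-1) * f(d(y))

# integrate the transformed model to t = 1 and compare with the exact x(t)
y, dt = 0.0, 1e-3
for _ in range(1000):
    y += dt * y_dot(y)
x_numeric = d(y)
x_exact = math.exp(-1.0)   # x(0) = d(0) = 1, hence x(t) = exp(-t)
```

Because the transformed right-hand side is exactly −1, the Euler integration is exact here, and mapping back through d recovers the primary-state solution.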

2.5.2.1 Example: Heating a Lump of Material

To demonstrate the standard transformation, let us look at a very simple system,
namely a lump of material that is heated up through a conductive heat stream.
Let E, S be the environment and the lump of material, respectively. Then the
energy balance is given by:

U̇S := q̂E|S − ŵS|E . (2.63)

The heat flow is modelled by:

q̂E|S := −cE|S (TS − TE) , (2.64)

with cE|S being the overall heat transfer coefficient times the transfer area, and
the system volume work term:

ŵS|E := p V̇ . (2.65)

For constant pressure this reduces to an enthalpy balance:

ḢS := q̂E|S . (2.66)

With the state variable transformation:

HS := ∫_{Tr}^{TS} (∂HS/∂T) dT , (2.67)
   := ∫_{Tr}^{TS} Cp(T) dT , (2.68)

we have exactly the case where the state variable transformation is explicit
in the primary state variable, here the enthalpy. The transformation thus follows
the second scheme; indeed, the required invertibility condition is satisfied.
Following the recipe, the differentiation of the variable transformation gives:

ḢS := Cp(TS) ṪS . (2.69)

Note that Cp(TS) is the total heat capacity here. Substitution yields:

ṪS := −(cE|S / Cp(TS)) (TS − TE) . (2.70)
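Equation (2.70) can be integrated directly. The sketch below uses explicit Euler with assumed illustrative values for cE|S, a constant total heat capacity and the two temperatures; none of the numbers come from the text.

```python
# Numerical sketch of Equation (2.70): a lump at temperature T_S relaxing
# towards the environment temperature T_E.  All values are assumed
# illustration numbers; C_p is taken constant for simplicity.

c_ES = 10.0    # heat transfer coefficient times area, W/K (assumed)
C_p = 100.0    # total heat capacity of the lump, J/K (assumed constant)
T_E = 350.0    # environment temperature, K
T_S = 300.0    # initial lump temperature, K

dt = 0.01
for _ in range(100_000):                     # integrate to t = 1000 s
    T_S += dt * (-c_ES / C_p) * (T_S - T_E)  # Equation (2.70)

# the time constant is C_p/c_ES = 10 s; after ~100 time constants the
# lump has essentially reached the environment temperature
```

A first-order exponential approach to TE, with time constant Cp/cE|S, is the expected behaviour of this single-capacity model.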

2.5.3 State Representations are not Unique

Whilst the concept of state is rooted in physics, particularly in thermodynamics,
there is no unique state representation, at least in mathematical terms. Let the
dynamic model (Equation (2.40))-(Equation (2.48)) be captured in the form

ẋ := f (y) , (2.71)
y := g(x) , (2.72)

where x is the vector of conserved quantities and y the vector of all algebraic
quantities defined in the model.

Introduce a state variable transformation of the form:

z := T x , (2.73)

where the transformation matrix T must only be non-singular if one wants to
be able to reconstruct the original state later. The transformed model reads:

ż := T ẋ := T f (y) , (2.74)
y := g(T^{-1} z) . (2.75)

Since there are infinitely many transformation matrices that satisfy the invertibility
condition, an equal number of equivalent state-space representations is
possible just through linear transformation.
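This equivalence is easy to verify numerically. The sketch below, with an assumed linear model and an arbitrarily chosen non-singular T, simulates the original and the transformed representation side by side.

```python
# Sketch of the linear state transformation (2.73)-(2.75): the toy model
# x_dot = f(y) = -y with y = g(x) = x is simulated both in the original
# state x and in the transformed state z = T x; mapping z back with
# T^(-1) must recover the x trajectory.

T = [[2.0, 1.0],
     [1.0, 1.0]]           # non-singular (det = 1), arbitrarily chosen
T_inv = [[1.0, -1.0],
         [-1.0, 2.0]]      # inverse of T

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def step(v, dt):           # one Euler step of v_dot = -v
    return [vi + dt * (-vi) for vi in v]

x = [1.0, 2.0]
z = matvec(T, x)           # transformed initial state, Equation (2.73)
dt = 1e-3
for _ in range(1000):
    x = step(x, dt)        # original representation
    z = step(z, dt)        # transformed: z_dot = T f(g(T_inv z)) = -z here
x_from_z = matvec(T_inv, z)  # reconstruct the original state
```

Both representations carry the same information; the reconstruction x_from_z agrees with x up to rounding, for any non-singular T.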

2.5.4 Minimal and Non-Minimal Representations

Starting with the conservation principles always results in a minimal representation
if one uses the basics only, that is, component mass, energy and momentum,
for example. One needs only to take care not to duplicate information, such as
using the component mass balances together with the total mass balance, but that is
rather obvious indeed.
Any transformation that has the same information contents is either of the
same dimension or it is larger. The latter is quite common. For example, process
descriptions are usually given in terms of the intensive quantities plus at least
one extensive quantity, such as concentration and volume, which is one dimension
bigger than the minimal.

2.6 Simplifying the Picture: Making Secondary
Assumptions

The previous sections describe the steps and elements resulting in an initial
distributed model, which is a network of coupled distributed systems. At this
point the assumption space splits and takes different routes, the different routes
meeting in some cases later down the road again. Thus the discussion cannot
be strictly sequential either, making things a little bit more difficult to follow.
In a first instance, we discuss a couple of common assumptions, which lead to
a set of simplifications for the individual distributed model. In particular, the
interaction between the primitive systems is looked at in different ways, and
a new term, transfer system, is introduced, which is very commonly used to
describe, so to speak, a realisation of what in thermodynamics is often referred
to as semi-permeable walls.

Figure 2.7: Different ways of simplifying a distributed system being
dually connected, for example. The diagram shows the assumption tree: from a
chain of PDE models via lumpy surfaces and physical transfer systems, then
either no internal mixing (approximating the integral: plug flow) or fast
internal mixing (single lump, coupled ODEs), and, with the no-capacity and
zero-capacity assumptions, finally a steady-state network.

2.6.1 Simplified Primitive Systems: Building Blocks for
Simpler Networks

2.6.1.1 Lumpy Boundaries

When fractioning the overall volume into smaller volumes, one generates surface
elements that separate adjacent systems. In most cases, one is not interested
in the flux, but rather in the total flow across such a surface element, which is
one reason for which one lumps the boundary. Secondly, one may have more
than one type of interaction between two adjacent systems; for example, there
may be a heat flow through a non-porous physical wall and a flow through an
opening in the same physical wall, which allows the two systems to interact
via heat transfer through the wall and mass transfer through the hole. The
lumping thus primarily splits the boundary into local boundary elements that
may be classified with regard to the type of extensive quantity being transferred,
a concept that is directly coupled to the typed thermodynamic walls (open,
closed, adiabatic, etc.).

Figure 2.8: System S with lumpy boundary: the boundary is split into the
elements Ω1, . . . , Ω6, located by the position vectors r1, r2, r3 relative to
the origin of the fixed observer.

The cumulative flow through a piece of boundary is simply the integral over the
respective boundary element Ωi:

ϕ̂_{Ωi} := ∫_{Ωi} δϕ̂^T ω dΩ . (2.76)

This integral measures the flow in the direction relative to the normal vector
of the boundary, where by convention the normal vector points away from the
system. In the abstraction process, the systems are pictorially pulled apart
and represented as circles, or other graphical objects depending on the type of
system (Figure 2.2).
The flow through the common piece of boundary between two systems is mapped
into a connection, which introduces a unique co-ordinate system against which
the actual flow between the connected systems is measured. This information
is captured in a notation <a>|Ωi|<b>, where <a> is the place holder
for the system in which the origin of the reference co-ordinate system is placed and
<b> is the system at the other end, whilst the common boundary piece Ωi
is placed between two vertical bars on either side, guarded by the two systems
(Figure 2.13). The reference co-ordinate, being introduced for each connection,
is denoted by α ∈ {−1, 0, +1}, where +1 indicates the head of a connection
arrow, −1 a respective tail and 0 no flow. Obviously, a flow must always
be defined between two systems; that is, flow may not just disappear into or
appear from the void. The sum of the control volumes is thus always closed,
representing the process-relevant universe (Section 2.1.1).
The integral balance equation for a system with stationary boundaries, that
is vΩ := 0, reads more compactly when lumping the flows for the boundary
elements:

∫_V (∂δΦS/∂t) dV := Σ_{∀c} αc ϕ̂c + NS^T ϕ̃S ,
                 := FS ϕ̂s + NS^T ϕ̃S , (2.77)

with

FS := [ [αc I]_{∀c} ]_{∀S} , (2.78)

a block diagonal matrix with identity blocks weighted with the respective reference
co-ordinate, and

ϕ̂s := [ ϕ̂c ]_{∀c} , (2.79)

a stack of all flow vectors. The row and the column sums of the connection
matrix are zero.
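The bookkeeping of (2.77)-(2.79) can be sketched for a small hypothetical network; the three systems, three connections and the flow values below are invented for illustration, with a scalar quantity so the identity blocks collapse to 1.

```python
# Sketch of the lumped-boundary balance (2.77): the accumulation in each
# system is the reference-coordinate-weighted sum of the flows on its
# boundary elements.  Hypothetical network: systems A, B, C and three
# directed connections A->B, B->C, A->C.

connections = [("A", "B"), ("B", "C"), ("A", "C")]
systems = ["A", "B", "C"]

# connection matrix F: one row per system, one column per connection;
# +1 where the arrow head points into the system, -1 at the tail
F = [[(+1 if c[1] == s else 0) + (-1 if c[0] == s else 0)
      for c in connections] for s in systems]

flows = [2.0, 1.5, 0.5]          # flows measured along the arrows (assumed)

accumulation = [sum(F[i][j] * flows[j] for j in range(len(flows)))
                for i in range(len(systems))]

column_sums = [sum(F[i][j] for i in range(len(systems)))
               for j in range(len(connections))]
```

Each flow appears exactly once with +1 and once with −1, so the column sums of the connection matrix vanish and the accumulations sum to zero: the transport terms cancel over the closed set of control volumes.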

2.6.1.2 Two Extreme Systems

The abstraction of the process-relevant universe into systems separated by
idealised walls can be further refined and generalised by recognising two limiting
cases: one in which a system is flown through with minimal internal re-circulation,
and one in which the in-flow and the out-flow are minimal compared
to the internal re-circulation. The limit is in both cases a reduction of minimal
flow to no flow (Figure 2.9). In the first case, one assumes a zero internal re-circulation,
whilst in the second case one sets the flow across the boundary to zero.
The argument is not only an order-of-magnitude assumption in the flow, but
also in the time scale: it is assumed that the dynamic window for the internal
process is clearly in the short time scale, compared to the dynamics of the flows
across the system's boundary.

2.6.1.2.1 Small Internal Re-Circulation, No Reactions and Slow Changes
at the Boundary

In this case, the stationary and constant control volume is placed in a flow
field. The modelling is done in a range of the time-scale where the changes
at the boundary are very slow; thus one may assume a stationary flow field,
which has no internal re-circulation, that is, the curl of the flow field is zero
((Deen, 1998), (Bird et al., 2001)). This in turn implies that the accumulation
terms in both the basic balances, the integral balance (Equation (2.9)) and the
differential balance (Equation (2.11)), approach zero, which is often referred to
as pseudo-steady state.

Figure 2.9: Two extremes: on the top no mixing at all, resulting in
a plug flow (the output yp is the input u delayed by a dead time), whilst
below there is nearly only mixing, at least in the short time scale, resulting
in the ideally mixed volume (the output yi follows the input u with a
first-order time constant).

Thus the integral balance (Equation (2.9)) reduces to:

0 := − ∫ δϕ̂^T ω dΩ , (2.80)

which with lumpy boundaries writes:

0 := Σ_c αc ϕ̂c , (2.81)
  := FS ϕ̂ . (2.82)

So the inflows balance the outflows, which matches the expectations.
The differential balance (Equation (2.11)) simplifies to:

0 := − ((∂/∂r)^T δϕ̂)^T . (2.83)

This equation describes an idealised fast transfer system, in which the internal
transport is fast compared to the changes at the boundary. The transport is a
function of the state of the system and the state of the connected system. With
the accumulation term disappearing, the resulting set of equations becomes
algebraic, from which the stationary distribution of the state can be computed
as a function of the conditions at the boundaries. Two examples can be found
in the appendix. The first analyses a very common assumption, namely the
heat transfer through a wall (Section 9.1.5). The second one is also a heat transfer
process, but it focuses on demonstrating the effects of having more than just
two active boundary pieces (Section 9.1.6).
Substituting the simple isotropic gradient transport law, Equation (2.26), one
gets:

0 := − (∂/∂r)^T ( λ (∂/∂r) π ) . (2.84)

So this is a second-order differential equation in π. For the transfer to be
computable, the solution to the second-order differential equation must exist
((Lin and Segel, 1988), p. 121). The existence of a solution is discussed early
in the literature (Courant et al., 1928). Lin and Segel, though, expressed the
fact ((Lin and Segel, 1988), p. 418) that "most scientists on most occasions do not
concern themselves with the thorny philosophical questions that emerge from a
searching examination of what lies at the foundation of their endeavours. ..."
The solution forms a hyper-surface, with the boundary condition defining the
position of this surface. Integrating the above equation once states that the flux
tensor δϕ̂ is constant:

δϕ̂ := −λ (∂/∂r) π := const . (2.85)

Two important lessons are to be drawn from this, namely the facts that
- the state is eliminated, and
- there is no time effect associated with the transfer.
For simple two-active-boundary systems, such as discussed in Section 9.1.5, the
time-scale assumption leads to a simplification of the transfer system to a simple
resistance, which is what the arrows in the first picture of the decomposition
represent (Figure 2.7).
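The constant-flux result (2.85) can be reproduced numerically for the one-dimensional, constant-λ case; grid size, conductivity and boundary values below are assumed for illustration.

```python
# Sketch of the stationary transfer system (2.84) in one dimension with
# constant conductivity: 0 = d/dr (lambda dpi/dr) with the potential pi
# fixed at the two boundaries.  A simple Gauss-Seidel relaxation on a
# grid converges to the stationary profile; the flux (2.85) then comes
# out constant across the domain.

n = 11                           # grid points over a domain of length 1
pi_left, pi_right = 1.0, 0.0     # boundary conditions (assumed values)
lam = 2.0                        # conductivity (assumed)
dr = 1.0 / (n - 1)

pi = [pi_left] + [0.0] * (n - 2) + [pi_right]
for _ in range(5000):            # relax the interior points to steady state
    for i in range(1, n - 1):
        pi[i] = 0.5 * (pi[i - 1] + pi[i + 1])

# flux between neighbouring grid points, Equation (2.85)
flux = [-lam * (pi[i + 1] - pi[i]) / dr for i in range(n - 1)]
```

The converged profile is linear, the flux is the same between every pair of neighbours, and its value equals λ·Δπ over the domain length: the transfer system has collapsed to a simple resistance.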

2.6.1.2.2 Maximal Internal Flow, Slow Reactions and Small, Slow
Flows Across the Boundaries

In this case one assumes strictly no flow across the boundary and maximal
internal flow. Placing the dynamic window into the small time scale, where the
reactions are slow and thus the turnover very small compared to the internal
flows, the differential balance (Equation (2.11)) reduces to

∂δΦS/∂t := − ((∂/∂r)^T δϕ̂)^T . (2.86)

Further assuming that the inflow and the outflow from the control volume are
small compared to the internal flows, the equilibrium is reached quickly. Thus
on the larger time scale, the internal fast dynamics are in equilibrium and no
change with time is observed:

0 := − ((∂/∂r)^T δϕ̂)^T . (2.87)

Since the inflow is negligible in this time scale, the system is closed and the
solution is a constant. So the intensive quantity δΦS is constant everywhere in
the region.

With the conditions in the contents being uniform, we shift the time scale to a longer
one. Now Equation (2.9) simplifies significantly: the densities are constant everywhere
in the volume; thus the volume integrals involving the densities change
to the volume times the densities, which is simply the corresponding extensive
quantity:

dΦS/dt = − ∫_Ω δϕ̂^T ω dΩ + NS^T ϕ̃S . (2.88)

Lumping the boundary (Section 2.6.1.1) and assigning the global co-ordinate,
the equation for the reactive, ideally-mixed domain emerges:

dΦS/dt = FS ϕ̂ + NS^T ϕ̃S . (2.89)

This equation describes an idealised capacity, namely a lumped system in which
the generalised densities, that is, the extensive quantities normed by the volume,
are constant within the control volume at a time scale that is large relative to
the internal mixing.

2.6.2 Network Representation

2.6.2.1 A Graph is a Very Rich Representation

The decomposition of the spatial domain the plant and its relevant environment
(Section 2.1) occupies into control volumes linked by connections yields
a network of capacities and connections. This can be depicted in the form of
a graph. The nodes of the graph are thus the primitive systems, and the connections,
representing the boundary conditions of continuous flux, are the arcs.
Both the capacities and the arcs may be typed, meaning they may receive a
"colour" indicating the type of fundamental extensive quantity being necessary
for the description (energy only; component mass and energy; . . . , for example)
and the connections the colour of the transferred type of fundamental extensive
quantity and its form (heat, work, component mass, etc.). Such a graph is shown
in Figure 2.11.

2.6.2.1.1 Assumed Nature of the Containment

The graphs are directed in that the arrow introduces a reference co-ordinate
system for each flow (Figure 2.13). The graph thus shows the interaction between
the capacities that were carved from the overall system, together representing
the system's entity. The graph can be used to depict the main primary
assumptions associated with mapping the world into equations. These assumptions
are fundamentally essential for the model, a fact that cannot be emphasised
enough. All information associated with the dynamics is captured in the graph.
For example, as in Figure 2.3, we can introduce circles for lumped systems and ellipses
for distributed systems, indicating the distribution co-ordinates in the ellipse.
The nature of the connection depends on whether the surface has been lumped or not.
In the case of a lumped system it is by definition lumped, but in the case of the
distributed systems it depends, and thus two different connections must be defined,
namely one between a lumped and a distributed surface and one between
two distributed surfaces. Figure 2.3 depicts an alternative.
What can one read from these graphs? Firstly, one can see how the plant
was broken down into control volumes and what type of assumptions have been
made in terms of the internal dynamics of the various control volumes (lumped,
distributed). One can also see which capacity element is talking with which
other one and in what way (lumped surface with lumped surface, lumped surface
with distributed surface, and distributed surface with distributed surface). One
may ask why one would have to distinguish between lumped
and distributed systems if everything really is distributed: this distinction
reflects the nature of the containment as assumed by the model designer.

2.6.2.1.2 Colouring in

Colours can be used to enrich the graph with additional information. On a first
level, one can give the transfers colours to indicate their nature, for example
black for mass, red for heat and blue for work. Alternatively, one can use
different types of arrows or line types. Figure 2.10 shows a simple batch reactor
taking material from two reservoirs and producing an output stream. A reaction
is taking place, namely A + B → C. A model designer may decide to represent
the plant as depicted in Figure 2.11.
The abstraction is the result of a number of assumptions:

• Reactor: two phases only, both captured in an individual lump, thus
assuming uniform intensive properties in each of them.
• Extraction phase: no A; thus species A is not diffusing across the boundary
from the reaction phase.
• Reaction phase: product C is essentially insoluble and diffuses into the
extraction phase, where it concentrates up.
• Heater: assumed as a simple capacity, communicates heat with both phases.
• Sensor: is only in contact with the extract phase, as it is the main phase
in the reactor.
• Feeds: the two reactants come from two separate reservoirs. The feed
tanks are thus essentially not modelled.
• Energy source: an infinite source of energy, thus not modelled.
• Product tank: a separator, not modelled. It would have to be modelled as a
two-phase system.
• The energy supplied by the energy reservoir is converted into heat in the
heater without any loss.
• The two phases may exchange heat (red, dotted) and mass (black, full).
• The overflow is a 2-phase flow.

The systems are colour coded:

• Pink is used for systems requiring an energy description only.
• Blue systems are not-modelled mass systems.
• Purple systems require mass and energy balances for their description.

Figure 2.10: The total plant, a 2-phase reactor with two feeds (inflow A and
inflow B), an overflow and an electrical heating/cooling device, equipped with
a temperature sensor. The droplet phase carries the reaction A + B → C with
negligible C; the extract phase contains no A; the outlet is a two-phase overflow.

Figure 2.11: The abstract representation of the plant with two reservoirs as
feeds (systems a and b), two lumps for the two immiscible fluid phases, (R) the
reaction phase and (E) the extraction phase, a single lump for the heater (h)
and one for the temperature sensor (s). The 2-phase overflow goes to a
separator (p). The connections carry the mass flows n̂a|R, n̂b|E, n̂E|R, n̂R|p
and n̂E|p, the heat flows q̂h|R, q̂E|R and q̂E|s, and the work flow ŵe|h.

The coloured dots indicate the presence of a species for each colour. Here red
was used for species A, green for species B and green-brown for species C. The
graph can be further structured into sub-graphs showing only the mass transfer
network, for example. Those can be further coloured to show the domain in
which the individual species exist, given a set of assumptions about the directionality
of flow (uni-directional or bi-directional) and the ability of transferring
a species through a given interface. The latter abstracts the semi-permeable
walls of thermodynamics. In our case, the mass transfer between the reaction
and extract phase does not transfer species A, whilst species C is assumed to be
insoluble in the reaction phase. These are obviously black/white assumptions:
the species is transferred or not, and the species is soluble or not. These assumptions
lead to a simplification of the model, as only the component mass balances
for the present species must be established in each primitive system. Similarly,
if a species is not transferred, no transfer law must be generated (Figure 2.12).

Figure 2.12: More colouring: the mass transfer network coloured for
the species, with one sub-graph over the systems a, b, R, E and P for each of
the species A, B and C.

The graph now contains all the information to write the model, with the exception
of the transfer models and the reaction model.

2.6.2.2 From Graphs to Equations

This graph, combined with the description of its components, represents a network
model, which can be written in a very condensed form. The construction
of the condensed form is demonstrated on a network of minimal dimension that
contains all components, namely a network of two systems being connected with
each other. Figure 2.13 shows such a network, assuming the two involved systems
are lumped and connected by two connections, each communicating an
extensive quantity ϕ̂ through the respective common part of the interface. Such
a system is described in Section 2.6.1.2.2, with the result in Equation (2.89):

Φ̇S := FS ϕ̂S + NS^T ϕ̃S . (2.90)

Figure 2.13: Two systems A and B communicating through two pieces, Ω1 and
Ω2, of the common boundary. The two connections carry the flows ϕ̂B|Ω1|A
(arrow from B to A through Ω1) and ϕ̂A|Ω2|B (arrow from A to B through Ω2).

Figure 2.13 illustrates the notation on an example of two connections being
defined between the two systems A and B. The arrows indicate the defined
directions for each connection. The flow is then measured relative to these
directions. In the description of system A, the flow ϕ̂B|Ω1|A shows positive
in system A and negative in system B, and inversely for the second defined flow
ϕ̂A|Ω2|B through the second boundary element. The conservation equations for
the two stationary, lumped systems are:

Φ̇A = ϕ̂B|Ω1|A − ϕ̂A|Ω2|B + NA^T ϕ̃A , (2.91)
Φ̇B = −ϕ̂B|Ω1|A + ϕ̂A|Ω2|B + NB^T ϕ̃B . (2.92)

By stacking the two systems on top of each other,

( Φ̇A )   ( +I  −I ) ( ϕ̂B|Ω1|A )   ( NA^T   0   ) ( ϕ̃A )
( Φ̇B ) = ( −I  +I ) ( ϕ̂A|Ω2|B ) + (  0   NB^T ) ( ϕ̃B ) ,  (2.93)

the notation can be further condensed:

Φ̇ = F ϕ̂ + N^T ϕ̃ . (2.94)

The stacking up can be done for any number of systems. The result always takes
the form of Equation (2.94). The F-matrix is a directed connection
matrix with diagonal blocks as before (Definition 2.78), but over all streams.
The stoichiometric matrix of the complete plant,

N := diag ( [ NS ]_{∀S} ) , (2.95)

is a block matrix with the stoichiometric matrix for each system as the respective
block.
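The stacking is mechanical, as the following sketch of the two-system example shows; the flow and transposition values are assumed, and the identity blocks collapse to scalars for a single conserved quantity.

```python
# Sketch of the stacked network form (2.93)-(2.94) for the two-system
# example of Figure 2.13, with a scalar conserved quantity and assumed
# numerical values for the flows and the transposition terms.

phi_B_O1_A = 1.2      # flow through boundary piece Omega_1 (arrow B -> A)
phi_A_O2_B = 0.4      # flow through boundary piece Omega_2 (arrow A -> B)

F = [[+1, -1],        # system A: head of connection 1, tail of connection 2
     [-1, +1]]        # system B: the opposite signs
phi_hat = [phi_B_O1_A, phi_A_O2_B]
phi_tilde = [0.1, -0.1]   # assumed transposition contributions N^T phi~

# condensed form (2.94): Phi_dot = F phi_hat + N^T phi_tilde
Phi_dot = [sum(F[i][j] * phi_hat[j] for j in range(2)) + phi_tilde[i]
           for i in range(2)]

# the individual balances (2.91)-(2.92), written out
Phi_dot_A = +phi_B_O1_A - phi_A_O2_B + phi_tilde[0]
Phi_dot_B = -phi_B_O1_A + phi_A_O2_B + phi_tilde[1]
```

The matrix form reproduces the individual balances, and since the column sums of F are zero, the transport contributions cancel over the network: only the transpositions change the overall inventory.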
2.6. SECONDARY ASSUMPTIONS 49

2.6.2.2.1 Adding Colours

The "colour" definitions can be directly used. For the purpose of demonstration,
let us use a simple colouring in which we indicate mass, heat and work
as separate colours. In the network representation we shall use superscripts to
indicate the "colour": m for mass, n for component mass where appropriate, q
for heat and w for work. The network representation of a plant described using
a state consisting of component mass and enthalpy, thus assuming constant
pressure, reads for the component mass balance and the enthalpy of system s:

ṅs := Σ_{∀m} αm Im n̂m + Ns^T ñs , (2.96)
Ḣs := Σ_{∀m} αm Ĥm + Σ_{∀q} αq q̂q + Σ_{∀w} αw ŵw , (2.97)

where m, q and w are used as the running sum parameters.

The feature that mass flow induces energy flow is clearly visible in the equations.
This part is what is often referred to as the convective heat stream, whilst
heat conduction and energy transfer by means of radiation are captured in
the q̂q streams. It should be noted that in this formulation the injection and
ejection volume work, associated with each mass stream, is incorporated into
the enthalpy. The work terms in the last sum thus comprise the remainder of
the work terms, namely mechanical work, such as input through a mixing element,
for example. The volume work term of the system is in the accumulation
term. The αm, αq, αw implement the reference co-ordinates as indicated in the
abstraction of the process as a directed graph.
Casting this representation into matrix equations gives:

ṅs := Fs^n n̂s + Ns^T ñs , (2.98)
Ḣs := fs^m Ĥs(n̂s) + fs^q q̂s + fs^w ŵs . (2.99)

With the vector n̂s being the stack of the flows attached to the system s, the
matrix Fs^n contains the non-zero elements of the s-th row of the adjacency matrix
for the mass-coloured graph. Allowing for more than one species, the row is
a set of equivalent rows, with one for each species; hence the index s. The row
vector fs^m is the scalar version of Fs^n, as the energy is a scalar quantity, but it is
associated with the same mass streams. Each enthalpy flow is calculated as a
mixture property of the respective stream. The abstraction is done similarly for
the other "colours", namely the heat and the work.
The stacking can be extended over the number of systems to obtain the intriguingly
compact representation:

ṅ := F^n n̂ + N^T ñ , (2.100)
Ḣ := F^m Ĥ(n̂) + F^q q̂ + F^w ŵ . (2.101)

The graph matrices are now the complete adjacency matrices of the respectively
coloured sub-graph of the overall representation of the plant: a graph of
capacities connected by the transfer of extensive quantities, with the capacities
having the ability to transpose extensive quantity. The stoichiometric matrix of
the network is a block diagonal matrix with the stoichiometry of each system in
the respective block. For the implementation, this representation can be further
abstracted into a global stoichiometric matrix which is index-mapped to form
the block diagonal matrix, being used here for simplicity of the equations.

2.6.3 Handling Complexity

Using these two basic components, the plant model is primarily a collection
of communicating subsystems. This concept can be extended to a hierarchical
representation, in which each subsystem may again consist of a network of subsystems.
That is, it can be subdivided into subsystems that exchange extensive
quantities, each of the subsystems being again dividable into communicating
subsystems, etc. The result is a strictly-hierarchical tree of systems with the
assembly of leaves representing the total relevant universe.

2.7 Three Extreme Dynamic Assumptions

Once one has a network of communicating capacities captured into a set of
equations, it is quite common to introduce another set of assumptions applied
to the three terms in the balance equations, namely the accumulation term, the
transport term and the transposition term (see also Section 2.1.4). A simple
process, depicted in Figure 2.15, serves as an illustration.

2.7.1 Fast and Slow Capacities

The first of the three key assumptions is that a capacity is much
smaller than the others. This introduces two time scales, namely a fast and a
slow one. The literature directly relating to chemical engineering often uses the
term pseudo-steady state if one is primarily interested in the slow dynamics. The
pattern fits nearly the standard singular perturbation (Section 7.5). It requires
a slight modification of the conservation equations. Though the modification is
simple, it is not without subtleties.
For the purpose of illustration, let us assume a plant consisting of two lumps
connected to a reservoir e (Figure 2.15): a large lump, labelled h, and
a small one, labelled s. Between the two we have a flow ϕ̂^i_{h|s}, and the
reservoir supplies ϕ̂^i_{e|h}. The balances for the two systems and the conserved
quantity Φ^i then read:

Φ̇^i_h := −ϕ̂^i_{h|s} + ϕ̂^i_{e|h} , (2.102)
Φ̇^i_s := +ϕ̂^i_{h|s} . (2.103)

Figure 2.14: Three stages of hierarchical abstraction of the reactor plant:
first lumping the reactor contents, the feed section (reactants), the energy
source and the down-stream separation; next lumping the reactor contents into
a single node; finally lumping the whole reactor.

Figure 2.15: A small capacity (s) communicating with a large capacity (h),
which is connected to an environment (e).

The two balance equations are usually normed with an extensive quantity that
is not changing with time in the given process. Often this is the volume. Thus
let ϕ^j be this extensive, time-constant quantity; then the following intensive
quantity is defined:

ξ^{ij} := Φ^i / ϕ^j . (2.104)
Applying the transformation to the two conservation equations, observing that
ϕ^j := constant, one gets:

ϕ^j_h ξ̇^{ij}_h := −ϕ̂^i_{h|s} + ϕ̂^i_{e|h} , (2.105)
ϕ^j_s ξ̇^{ij}_s := +ϕ̂^i_{h|s} , (2.106)

which is a singular perturbation problem in standard form (Section 7.5).

2.7.1.1 Outer Solution

For the outer solution, we first observe that

lim_{ϕ^j_s → 0} ϕ^j_s ξ̇^{ij}_s := ϕ̂^i_{h|s} , (2.107)
0 := ϕ̂^i_{h|s} . (2.108)

Thus, assuming that Tikhonov's condition is satisfied, the singularly perturbed
system (Section 7.5.1.1) is:

ϕ^j_h ξ̇^{ij}_h := ϕ̂^i_{e|h} , (2.109)
Φ̇^i_h := ϕ̂^i_{e|h} . (2.110)

Figure 2.16: Assuming the small capacity to be negligible in capacity makes
it disappear: only the reservoir e and the large capacity h, connected by
ϕ̂e|h, remain.

The second of the above equations indicates that the norming of the slow
equation is not required for obtaining the outer solution. It is, though,
illustrative for reasons of comparing the large with the small capacities. The
solution is obtained by simple integration:

Φ(t) := ∫_0^t ϕ̂^i_{e|h} dt + Φ(0) . (2.111)

Obviously, to execute the integration, the model would have to be completed by
adding the description for the transfer ϕ̂^i_{e|h}.

2.7.1.2 Inner Solution

The inner solution is obtained by stretching the time scale (Section 7.5.1.2):

τ := t / ε , (2.112)

and setting ε := ϕ^j_s one finds

ξ^{ij}_s (τ) := ∫_t^{t+Δt} ϕ̂^i_{h|s}(Φ^i_h(t)) dτ′ . (2.113)

The inner solution has thus the state of the slow system as input, which implies
that the fast system approaches the current state of the slow system in the short
time scale (Figure 2.17).

Figure 2.17: The small capacity approaches the equilibrium with
the large capacity quickly. The large capacity acts as a reservoir.

The outer solution is probably more often required than the inner, though with
the growing interest in multi-scale systems, interest is correspondingly shifting.
Nevertheless, let us have a closer look at the outer solution for a larger system
next.

On the slow time scale, the fast system is "coalesced" by the slow one.
On the fast time scale, the slow one stands still.
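The inner/outer behaviour can be observed numerically. The sketch below assumes linear transfer laws and illustrative capacities; the full two-time-scale model (2.105)-(2.106) is compared against the outer model (2.109)-(2.110).

```python
# Numerical sketch of the two-time-scale system (2.105)-(2.106) with
# assumed linear transfer laws: the reservoir e drives the large capacity h,
# which drives the small capacity s.  With phi_s very small, the outer
# (slow) model (2.109)-(2.110) should be recovered.

xi_e = 1.0                      # reservoir potential (constant, assumed)
c_eh, c_hs = 1.0, 5.0           # transfer coefficients (assumed)
phi_h, phi_s = 10.0, 0.01       # large and (very) small capacity

def simulate_full(t_end=20.0, dt=1e-4):
    # dt must resolve the fast time scale phi_s/c_hs = 0.002
    xi_h, xi_s = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        f_hs = c_hs * (xi_h - xi_s)   # fast internal transfer
        f_eh = c_eh * (xi_e - xi_h)   # slow supply from the reservoir
        xi_h += dt * (f_eh - f_hs) / phi_h
        xi_s += dt * f_hs / phi_s
    return xi_h, xi_s

def simulate_outer(t_end=20.0, dt=1e-4):
    xi_h = 0.0                  # outer model: xi_s simply tracks xi_h
    for _ in range(int(t_end / dt)):
        xi_h += dt * c_eh * (xi_e - xi_h) / phi_h
    return xi_h

xi_h_full, xi_s_full = simulate_full()
xi_h_outer = simulate_outer()
```

In the full simulation the small capacity rides on the current state of the large one, and the large capacity's trajectory is indistinguishable, within the perturbation error, from the reduced outer model.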

2.7.2 Fast Transfer

The second common assumption is that of fast transfer. This is often used
when one lacks knowledge about a particular stream except that it is fast
compared to the relevant dynamics. Again we start with the sample process
(Figure 2.15), but in this case we state the assumption that

slow transfer: ϕ̂s := ϕ̂^i_{e|h} , (2.114)
fast transfer: ϕ̂f := ϕ̂^i_{h|s} . (2.115)

The nature of the extensive quantity is not relevant; thus we also drop the
superscript to simplify the notation.

Φ̇h := −ϕ̂f + ϕ̂s , (2.116)
Φ̇s := +ϕ̂f . (2.117)

Further, for the purpose of illustration, let the transfer laws be given by:

ϕ̂i := −Θi Δπi , i ∈ {s, f} , (2.118)

where the parameter for the fast transfer is much larger than the parameter
for the slow transfer, thus Θf ≫ Θs. Dividing the conservation by the fast
transfer parameter yields:

Θf^{-1} Φ̇h := Δπf − (Θs/Θf) Δπs , (2.119)
Θf^{-1} Φ̇s := −Δπf . (2.120)

And

lim_{Θf→∞} Θf^{-1} Φ̇h := lim_{Θf→∞} ( Δπf − (Θs/Θf) Δπs ) , (2.121)
lim_{Θf→∞} Θf^{-1} Φ̇s := lim_{Θf→∞} ( −Δπf ) . (2.122)

Thus

0 := Δπf . (2.123)

The assumption thus results in the equilibrium condition for the boundary, meaning
that the conjugate to the potential is equal on the two sides of the boundary.
With the condition given in Equation (2.28), the argument holds for a general
transfer law.

Assuming instant transfer results in the two connected systems being at
equilibrium with respect to the force driving the instant transfer.

The equilibrium relation introduces an algebraic link between state variables and
thus introduces an index problem into the description, asking for a corresponding
reduction of the state space. This can be achieved by eliminating the unknown
fast transfer, namely by adding the two systems together, which yields
the overall balance of the two systems:

Φ̇_{s+h} := ϕ̂_s . (2.124)

Thus the resulting topology, Figure 2.18, is now slightly different from Figure 2.16
in that the capacity of the small system is included.

Figure 2.18: Assuming fast transfer between the systems h and s
yields a slightly different reduced system than assuming negligible
capacity (Figure 2.16).


By eliminating the fast transfer, the state of the model changes from being two-dimensional
to being one-dimensional; thus the state is reduced, and it
appears that one has lost the information about the state of the individual systems.
This is not the case, because the equilibrium relation, together with
the relations linking π to the state, provides the missing equations. As can be seen from their definition in
Section 2.4.1.1 they are of the same dimension, and the problem reduces to a
root-finding problem.

Knowledge of the individual states can be reconstructed from the
equilibrium condition.
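As a minimal sketch of this root-finding step, consider two lumps h and s with constant heat capacities that exchange heat infinitely fast, so that only the merged energy is a state. The individual energies are recovered from the equilibrium condition T_h = T_s, here by a simple bisection on the energy residual. All numbers and the linear energy-temperature relations are illustrative assumptions, not part of the text above.

```python
# A minimal sketch of this root-finding step (hypothetical numbers): two
# lumps h and s with constant heat capacities C_h, C_s exchange heat
# infinitely fast, so only the merged energy is a state.  The individual
# energies are recovered from the equilibrium condition T_h = T_s.

C_h, C_s = 2.0, 10.0       # heat capacities of the two lumps (assumed)
U_total = 60.0             # merged conserved quantity Phi_{s+h}

def residual(T):
    # at equilibrium both lumps share T, and the energies must sum to U_total
    return C_h * T + C_s * T - U_total

lo, hi = 0.0, 1000.0       # bracket for the bisection
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
T_eq = 0.5 * (lo + hi)     # here 60 / (2 + 10) = 5

U_h, U_s = C_h * T_eq, C_s * T_eq      # reconstructed individual states
```

For the linear case the root could of course be written down directly; the bisection stands in for the general nonlinear material descriptions mentioned in Section 2.4.1.1.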

2.7.3 Fast Transpositions

The argument for handling fast transpositions is similar to the argument used
to abstract systems with fast transport terms, only that the transport parameter
is replaced by the reaction parameter, the reaction "constant", which is not
really constant but a strong function of the temperature. The main difference to
the transport case is, though, that the transposition occurs within a system
and not between two systems. Thus it is sufficient to study a single system for
this case. Let its conservation be

ẋ := F x̂ + S x̃ , (2.125)

and the transposition be described by pairs of reactions, one forward and one
backward. Equilibrium is approached when the forward reaction (index f) is
equal to the backward reaction (index b), with both reaction constants being
large. This result is readily obtained by scaling the conservation with one of
the two reaction constants and applying the singular perturbation argument
analogous to the argument used for the fast transfer.
For a pair of forward and backward reactions, the S matrix, being the transpose
of the stoichiometric matrix as it is usually defined, includes the two vectors of
the stoichiometric coefficients, thus

S := [ν_f , ν_b] . (2.126)

The transposition is

x̃ := V K g(y) , (2.127)

where the function g(y) reflects the dependency of the transposition on the secondary
state. In the case of chemical reactions, this function is usually a power
function of the involved species' concentrations with the respective stoichiometric
coefficients as power coefficients. An extensive quantity usually enters because
the transposition rate is normed by this extensive quantity, often the volume,
but it can also be the area if one talks about an active surface. The matrix K
is a diagonal matrix with the reaction constants.
As suggested, labelling the forward and backward reaction with the indices f and
b respectively, the resulting expression for species s is ν_s k_b g_b(y) + ν_s k_f g_f(y).
The conservation is then of the form:

ẋ := F x̂ + V ν_b k_b g_b(y) + V ν_f k_f g_f(y) . (2.128)

Since

ν_b := −ν_f , (2.129)

scaling the equation with one of the two large reaction constants gives

k_f^{−1} ẋ := k_f^{−1} F x̂ − V ν_f ( (k_b/k_f) g_b(y) − g_f(y) ) . (2.130)

Thus, taking the limit and observing that the two reaction constants are of the
same order of magnitude, one finds:

lim_{k_f,k_b→∞} k_f^{−1} ẋ := 0 := lim_{k_f,k_b→∞} ( k_f^{−1} F x̂ − V ν_f ( (k_b/k_f) g_b(y) − g_f(y) ) ) . (2.131)

Consequently:

0 := (k_b/k_f) g_b(y) − g_f(y) , (2.132)

assuming that the stoichiometric coefficient is not equal to zero. With y usually
being the composition, this reaction-equilibrium equation provides an algebraic
link between the concentrations of the species involved in the reaction. For
example, if the reaction is A → B and y := c, the vector of concentrations, and

g_b(y) := c_B^{γ_b} , (2.133)

g_f(y) := c_A^{γ_f} , (2.134)

then one gets for the equilibrium condition:

k_f(T)/k_b(T) := c_B^{γ_b} / c_A^{γ_f} . (2.135)

Given the reaction, the power coefficients are likely to be 1, thus γ_f := γ_b := 1 and

k_f(T)/k_b(T) := c_B/c_A . (2.136)

It is interesting in this context to observe that the non-linearities in the temperature
of the reaction constants compensate each other to some extent. Thus
the reaction equilibrium is much less a function of the temperature than the
individual reaction "constants".
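This compensation can be sketched numerically with Arrhenius-type rate "constants" k(T) = A exp(−E/(R T)): the equilibrium ratio k_f/k_b depends only on the difference E_f − E_b, so when the two activation energies are close the ratio varies far less with temperature than either constant alone. All numbers below are illustrative assumptions.

```python
# Sketch with Arrhenius "constants" k(T) = A exp(-E/(R T)): the ratio
# k_f/k_b depends only on E_f - E_b, so it varies much less with T than
# either constant alone.  All numbers are illustrative.
import math

R = 8.314                          # J/(mol K)
A_f, E_f = 1.0e9, 80.0e3           # forward pre-exponential, activation energy
A_b, E_b = 5.0e8, 70.0e3           # backward values (assumed)

def k(A, E, T):
    return A * math.exp(-E / (R * T))

T1, T2 = 300.0, 320.0
growth_kf = k(A_f, E_f, T2) / k(A_f, E_f, T1)      # ~ 7.4
growth_K = (k(A_f, E_f, T2) / k(A_b, E_b, T2)) \
         / (k(A_f, E_f, T1) / k(A_b, E_b, T1))     # ~ 1.3
```

Over the same 20 K interval the forward constant grows by roughly a factor of seven, whilst the equilibrium ratio changes by only about 30 %.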
The rate adjusts to the equilibrium condition; it is thus not known and is eliminated
from the conservation equation through a null-space calculation:
given the conservation for a system, one splits the reactions into a set of fast
and a set of slow reactions:

ẋ := F x̂ + S_s x̃_s + S_f x̃_f . (2.137)

In order to eliminate the fast reactions one multiplies with a matrix:

Ω ẋ := Ω F x̂ + Ω S_s x̃_s + Ω S_f x̃_f , (2.138)

with Ω such that Ω S_f := 0. The reader should notice that this elimination operation,
whilst similar, is different from the elimination of flows. When eliminating
fast reactions, the result is a linear combination of species masses in the affected
system, whilst for the flow elimination, the species masses of the two connected
systems are added. The fast reaction assumption does not affect the hydraulics
of the process, but forms invariant species groups. A good example for such a
system is an acid-alkali reaction, which is very fast compared to many other
types of reactions [3].
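A minimal sketch of such an invariant species group, for the A ⇌ B example: the left null-space row Ω = [1, 1] satisfies Ω S_f = 0 for S_f = [−1, 1]^T, so the combination n_A + n_B is untouched by the fast reaction and only grows through a slow feed, while the composition relaxes onto the reaction equilibrium. All rate constants and feed values are assumptions for illustration.

```python
# Sketch for the A <-> B example above: Omega with Omega S_f = 0 eliminates
# the fast reaction.  n_A + n_B is invariant under the reaction and grows
# only through the slow feed.  All numbers are illustrative.

S_f = [-1.0, 1.0]         # stoichiometric column of A -> B
Omega = [1.0, 1.0]        # left null space: Omega . S_f = 0

n = [1.0, 0.0]            # moles of A and B
feed_A = 0.1              # slow inflow of A
kf, kb = 50.0, 25.0       # fast forward/backward rate constants
dt, steps = 1e-4, 20000   # explicit Euler over 2 time units

for _ in range(steps):
    rate = kf * n[0] - kb * n[1]          # net fast reaction rate
    n[0] += dt * (feed_A + S_f[0] * rate)
    n[1] += dt * (S_f[1] * rate)

invariant = Omega[0] * n[0] + Omega[1] * n[1]   # grows only via the feed
ratio = n[1] / n[0]       # approaches kf/kb = 2 (reaction equilibrium)
```

For larger networks Ω would be computed as a basis of the left null space of S_f, for example via a singular value decomposition.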

Assuming instant reaction in a two-way reaction system results in a
reaction equilibrium, requiring a respective reduction of the state space so as
to avoid index problems.

2.7.4 Outer Solution for a Network of Fast and Slow Systems

For the purpose of this exposition, let us define two sub-networks, one slow and
one fast, indicated by the two indices s, f:

Φ̇_s := F_s ϕ̂_s + F_{fs} ϕ̂_{sf} + S_s Φ̃_s , (2.139)

Φ̇_f := F_f ϕ̂_f + F_{sf} ϕ̂_{sf} + S_f Φ̃_f , (2.140)

with the graphs F_s, F_f, F_{sf} = −F_{fs} ¹¹ being the direction matrices for the slow
internal streams, the fast internal streams and the streams coupling the fast and
the slow sub-networks.

¹¹ Notice the relation between the two graph matrices connecting the slow and the fast
sub-networks.

Norming the conserved extensive quantities with a time-constant extensive quantity,

ξ_{ij} := Φ_i / Φ_j , (2.141)

and checking the validity of the Tikhonov condition, the fast network reduces to:

0 := F_f ϕ̂_f + F_{sf} ϕ̂_{sf} + S_f Φ̃_f . (2.142)

Thus the two sets are:


Φ̇s := Fs ϕ̂s − Fsf ϕ̂sf + Ss Φ̃s , (2.143)
0 := Ff ϕ̂f + Fsf ϕ̂sf + Sf Φ̃f , (2.144)
Φ̇s := Fs ϕ̂s + Ff ϕ̂f + Ss Φ̃s + Sf Φ̃f . (2.145)

Looking at the very common case, where the fast sub-network has only fast
internal flows and no transposition, the problem reduces significantly:

Φ̇_s := F_s ϕ̂_s + S_s Φ̃_s . (2.146)

The singular perturbation removes the fast state variables from the representation.
No relation between the state of the fast and the state of the slow system
results from this manipulation. Thus, if the exchange of extensive quantity between
the fast system and the environment is not measured, but only known as
a function of the state of the fast system and the state of the environment, the
model is not complete.
This assumption is often introduced when one knows all the streams in and out
of the fast system except one, assuming here that all the stream vectors have the
same dimensionality as the state. Knowing all the other streams, the steady-state
balance equation enables the computation of the one remaining stream. This argument
is to be adjusted if the dimensionalities of the various connections do not match the
dimensionality of the system's state.
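The outer-solution idea can be sketched with the smallest possible network: one slow capacity coupled to one fast one. The full two-state model is integrated and compared with the reduced (outer) model in which the fast state sits on its quasi-steady manifold x_f ≈ x_s, leaving d(x_s)/dt = −x_s. The time-scale ratio and all coefficients are illustrative assumptions.

```python
# Sketch: full fast/slow model vs. the reduced outer model.  With the fast
# state relaxing as (x_s - x_f)/eps, the slow manifold is x_f ~ x_s and the
# reduced slow equation is d(x_s)/dt = -2 x_s + x_f ~ -x_s.
import math

eps = 1e-3                 # fast-to-slow time-scale ratio (assumed)

xs, xf = 1.0, 0.0
dt = 1e-4                  # must resolve the fast scale for the full model
for _ in range(10000):     # explicit Euler to t = 1
    dxs = -2.0 * xs + xf
    dxf = (xs - xf) / eps  # fast relaxation towards the slow state
    xs += dt * dxs
    xf += dt * dxf

xs_outer = math.exp(-1.0)  # reduced (outer) model solution at t = 1
```

After the short initial layer the full solution tracks the reduced one to within the order of eps plus the integration error, which is the content of the Tikhonov argument.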

2.7.5 Assumptions on Assemblies

It is not uncommon that one has knowledge about a state-dependent quantity of
an assembly of primitive systems, which stimulates making and implementing
an assumption. A well-known example is the assumption of constant, known
volume of multiphase systems enclosed in a common confinement.
Given the standard network model

ẋ := F x̂ + S x̃ , (2.147)

one can split the network into two subsections, thereby isolating the part for which
the assumption shall be made. Let matrix P_a be a non-square selection matrix
that isolates the part for which an assumption shall be made. Further, let
Ω be a matrix of dimension k × n; then a typical assembly assumption is

Ω P_a ẋ := Ω P_a F x̂ + Ω P_a S x̃ := 0 , (2.148)

that is, a linear combination of the states is constant. This defines k algebraic
constraints providing equations for k dependent algebraic variables. The
above equations may be used to determine a set of dependent quantities. Bipartite
graph analysis can help here to determine the set of possible quantities
that can be determined in a specific case. Further, the above equations can be
added to the other part, thereby eliminating the connecting streams, but providing
the opportunity to possibly compute quantities that depend on the algebraic
constraints.

2.7.6 Assumptions in the Space of the Secondary States

The network models, being formulated in the space of the conserved quantities,
which we term the primary state space, can be transformed into a secondary state
space by means of state variable transformations. In fact, in chemical engineering,
models in the secondary states are more common than in the primary, because
substituting as early as possible is considered good mathematical praxis. Thus
one usually does not use the models in the primary state space, which is also
a minimal space. The approach discussed here represents therefore a deviation
from the standard chemical engineering practice.
The transformation can be formalized readily (Section 2.5.2), yielding

J_{yx} ẋ := J_{yx} F x̂ + J_{yx} S x̃ , (2.149)

with

J_{yx} := ∂y/∂x^T , (2.150)

resulting in a transformed model:

ẏ := F_y x̂(y) + S_y x̃(y) . (2.151)

From this point on, one can implement the same assumptions as discussed
above. Very common is the assumption of constant volume for single
systems or assemblies.
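As a minimal sketch of such a transformation, take a single heated lump: primary state U (energy), secondary state T with U = C T, so the Jacobian J_yx = dT/dU = 1/C maps the primary balance dU/dt = q into dT/dt = q/C. Integrating either form must give the same temperature. The constant heat capacity and all numbers are illustrative assumptions.

```python
# Sketch of a primary-to-secondary state transformation: U = C*T with
# constant C, so J_yx = 1/C and the balance dU/dt = q becomes dT/dt = q/C.
# Both forms are integrated and must agree.

C, q = 4.0, 2.0            # heat capacity and constant heat inflow (assumed)
dt, steps = 1e-3, 1000

U = 100.0                  # primary state (energy)
T = U / C                  # secondary state (temperature)
for _ in range(steps):
    U += dt * q            # primary balance
    T += dt * q / C        # transformed balance, via J_yx = 1/C

T_from_primary = U / C     # must coincide with the directly integrated T
```

With a state-dependent C the Jacobian would itself be a function of the state, which is where the implicit relations mentioned above enter.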

2.7.7 Unmodelled Components

When modelling a plant it is quite common that one does not have much of a
clue about what precisely happens in the plant, but may have an idea of what the
effect is. Two very common problems are that one does not know how to
model a flow precisely or what reaction is taking place. However, one knows that
certain things are being controlled, for example the temperature, or that the
volume is approximately constant in an overflow situation etc., or that capacity
effects can be neglected.
Thus the before-mentioned simplifications can be used to determine the missing
model components by means of making order-of-magnitude assumptions.
One may assume that a capacity effect is zero, or that a respective dependent secondary
state variable is constant, or event dynamics on the flow in question yields an
algebraic condition for the missing stream information.

2.8 Link to Control-Related System Theory

The modelling that has been discussed so far is done in the time domain and
aims at describing the process for a class of applications associated with understanding
the functioning of a process and consequently its use for design
and operations in the widest sense. The respective theoretical field is usually
referred to as system theory, looking into the properties of models and the theoretical
side of their application. System theory introduces, as any other field, its
own notion of how to look at and describe the object of its interest, here termed
system, and the methods it uses for analysis and design.
Having described the plant in the time domain, the state is thereby introduced
as a core concept. The state was simply defined as the vector of the conserved
quantities, which yields a minimal representation, that is, the number of state
variables is minimal in order to capture the behaviour of the system. State
transformations then allow for changing the description, whereby the change
is usually motivated by getting a different insight or a representation that is
particularly well suited for the target mathematical method. Motivation of the
transformation is thus often method-driven.
System theory is interested in how a mathematical object behaves, thereby characterising
its properties. It thus structures the model for this purpose and defines,
besides the state, two more central objects, namely the input, the quantity
affecting the state, and the output, being often the observation or measurement.

2.8.1 Plant and Its Environment

A plant is interacting with its environment, which also includes the human-designed
controllers and observing instruments. This pattern can be captured
in an abstraction as shown in Figure 2.19. The plant is affected by
the two streams, whereby the two streams are driven by the difference in the
respective driving force, being in turn a function of the state of the respectively
connected systems. Thus if one looks at the physical system only, the plant is
driven by the difference in the state between the plant and the environment at
the respective location of the connection.
The controller gets the information about the state of the plant and the state of
the environment as an input. In addition, one could also introduce observations
of the flows between the two systems. But having defined that the streams have
no capacity, one can just as well argue that this information is available at the
two ends, namely in the plant at the particular location and in the environment
where the streams are connected. The controller gets a set point from the
outside whilst manipulating the resistance in the stream, here shown with a
valve symbol. In all cases, the manipulation is attached to the streams, though
one can construct assemblies consisting of reservoirs and ideal controllers that
control a state of a supply system which then is connected to the plant.
Using the term input for what is physically driving, it is the conjugate to the
potential of the environment driving the physical transport (being the y's in
Equation (2.43)), whilst the input manipulating the flow is the resistance or its


Figure 2.19: Abstracting the interaction of a plant with its environment,
where one set of interactions is free, meaning it cannot be
controlled, whilst the other interactions can be manipulated. The
controller gets information from both partners, the environment
and the embedded plant.

inverse, being manipulated by the controller (being the p_{a|b}, see Section 2.5.1).

For the controller, the term input is used for the observations of the state and
the setpoint information, whilst the output is the manipulated variable affecting
the flow between the two systems.
Figure 2.20 shows the block diagram as it evolves from Figure 2.21. Note how
the controller gets the secondary state as input and computes properties of the
transport equations. The model, whilst linear in transport and transposition, is
nonlinear in the variables the controller manipulates and also in the state.
Even worse, there are likely implicit algebraic equations involved.

2.9 Summary
2.9.1 The Primary Model
2.9.1.1 Network of Distributed Systems

This first chapter introduced the concept of a basic mapping of a physical system
into a set of mathematical objects that form the basic model. Mathematically
this representation is a distributed one, meaning that the dynamic system is
not only a function of time, but also of the spatial co-ordinates. The fundamental
description is based on the conservation laws; thus the fundamental state
is formed by the conserved quantities, the fundamental extensive quantities,

Figure 2.20: Block diagram of a controlled process. The controller
gets state information and manipulates the flow by changing parameters
in the transfer law.

typically component mass, total energy and momentum.

2.9.1.2 Transport and Internal Transposition

In a first step, this description is augmented with two more components, namely
the internal dynamics, which describe the transposition of one conserved
quantity into another one, reactions for example, and the transfer of the extensive
quantities within the system and across its boundaries. The latter are often phase
boundaries and are characterised by discontinuities in a set of intensive variables,
whilst the fundamental extensive variables are continuous. Also, the derivatives
of the total energy with respect to the natural variables (component mass, volume,
entropy) yield the respective conjugated variables whose gradients represent
the driving forces for the transfer of the respective fundamental extensive
quantity. Two conditions: there is no transfer if the driving force is
zero, and the flow direction is determined by the sign of the driving
force: the system tends towards a uniform energy distribution.

2.9.2 Simplifications

2.9.2.1 No Internal Mixing vs. Dominated by Internal Mixing

These are the two main extreme cases.

If there is no internal mixing in a given direction, the process is simply
flow-through in this direction: the extensive quantity comes in and flows in
a given pattern out again, having potentially undergone an internal conversion
and exchanged extensive quantity in the other directions with the environment.
The system behaves in the respective direction as a transport lag with
transposition and exchange in the other directions.
If internal mixing dominates and the flow in and out of the system is relatively
slow and small, then the system may be seen as internally uniform on a
longer time scale. The system behaves as a uniform lump.
In chemical engineering the first type is referred to as plug flow, whilst the
second is ideally stirred or ideally mixed.

2.9.2.2 Order-of-Magnitude Assumptions

Three assumptions are commonly made, all of which convert to time-scale assumptions:

• Small vs. large capacities: being usually interested in the longer time
scale, this leads in a first instance to an event-dynamic assumption for the
fast parts, thereby eliminating the state of the fast system. This enables
the computation of streams in or out of the fast part for which one has no
model available.
• Fast vs. slow transport: in the short time scale this eliminates the slow
transport, with the dynamics completely dominated by the fast transport.
In the slow time scale, the fast transport forces the local equilibrium for
the quantity that is transported fast. This forms an algebraic link between
the states of the two fast-coupled systems, requiring an according reduction
of the state.
• Fast vs. slow transposition: in the short time scale it is the transposition
that dominates. It usually requires an equally fast supply of the
quantities being transposed into each other. On the long time scale a local
equilibrium is achieved, as the transposition goes both ways. This forms
an algebraic link between state variables, requiring an according reduction
of the state.

2.9.2.3 Modelling Patterns

2.9.2.3.1 Step by Step

Going step by step results in a proper model, guaranteed!

• Step 0 - Design a physical topology: This first operation maps the
real-world plant into a network of capacities and connections. It is the main
and critical operation, as it limits the descriptive power of any derived
model. The main aspect is to think about the relative dynamics of the components
and about what is being controlled and observed. What capacity effects can
be neglected, what can be lumped and what needs to be handled as a
distributed system? Include all knowledge in this graph. Add colours
for distinct quantities, such as component masses and energy transport in
the form of heat and work.
• Step 1 - Dynamics: Establish balances of the conserved quantities for
each system. This represents the dynamics of the plant. The colours map
into the network representation directly. The variables in the accumulation
terms are the states, the fundamental state variables, as they span a
minimal space from which all the other state-dependent information can
be derived.
• Step 2a - Transfer: Add transfer equations. These equations link systems
with each other and make the state of one a function of the other. The
driving forces enter the description if the transport is not simply measured
in one or the other quantity, in which case only transformations are
needed. The driving forces are a function of the state and are thus classified
as secondary states.
• Step 2b - Transposition: The kinetic laws describing the dynamics of
the transposition are usually empirical descriptions that, because transposition
occurs inside the system, are dependent on the state of the system
only.
• Step 3 - State variable transformations: The secondary state variables
introduced in the transfer laws and the kinetics are to be obtained
from the primary state through appropriate state-variable mappings.
This includes primarily material descriptions, details of the kinetics and
geometry. Relations are often implicit, notoriously temperature and geometrical
quantities.

2.9.2.3.2 Not Knowing it All

The model for a single system embedded in its environment takes the form:

dynamics: ẋ := F x̂ + R x̃ , (2.152)
transport: x̂ := t(y, y_e) , (2.153)
transposition: x̃ := r(y) , (2.154)
state var. transformation: 0 := s(y, x) . (2.155)
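This structure can be sketched as a tiny simulation skeleton for a hypothetical special case: a single component mass x in a lump of fixed volume, with a linear transport law t(·), first-order transposition r(·), and an explicit secondary-state map y = x/V (the concentration). All parameters and function forms are illustrative assumptions, not prescriptions from the text.

```python
# Sketch of the single-system structure: balance dx/dt = t(y, ye) + r(y)
# with the secondary state y = x / V.  All numbers are illustrative.

V = 2.0                           # fixed volume (assumption)

def transport(y, y_env):          # x^ := t(y, ye): linear driving force
    return 0.5 * (y_env - y)

def transposition(y):             # x~ := r(y): first-order consumption
    return -V * y

def secondary(x):                 # y from x: here simply the concentration
    return x / V

x, y_env = 0.0, 1.0
dt = 1e-3
for _ in range(20000):            # explicit Euler over 20 time units
    y = secondary(x)
    x += dt * (transport(y, y_env) + transposition(y))

c_steady = secondary(x)           # steady state: 0.5 (1 - c) = V c, c = 0.2
```

In a real model the map 0 := s(y, x) is usually implicit and would require a root-finding step inside the loop, exactly as the block diagram indicates.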

Case 1: Some of the flows are measured:

• Split the flows into two sets, measured and modelled.

• No further action required.


Figure 2.21: Signal block diagram view of the equations representing
the uncontrolled process. Constructing the model goes through
four steps: Step 0 is the initial mapping of the process into a graph
consisting of capacities and connections, thus viewing the plant as
a network of communicating control volumes. Adding "colours"
identifies the relevant extensive quantities.
Step 1: Generate all relevant balances.
Step 2a: Provide a description for each stream of extensive
quantity, including directionality relative to the one defined in the
directed graph representing the control volume network.
Step 2b: Provide a description for each transposition kinetic.
Step 3: Provide mappings of the state defined by the conserved
quantities to the secondary state variables defined in step 2, that is,
transfer and transposition.
Case 2: Some flows are not known, not modelled nor measured.

• Split the flows into known and unknown ones.

• Option 1: Check if a small-capacity assumption (pseudo steady state) for
the system or parts of the system appropriately describes the situation
by comparing the system with its environment in terms of relative
dynamics. If applicable, check if the singular perturbation approach provides
enough algebraic equations to reconstruct the missing flows (often a
one-to-one relation, meaning one unknown flow, one pseudo-steady-state
assumption).
• Option 2: Assuming fast flows enables the combination of the system with
its environment in the quantity for which fast transfer can be assumed.
Again, the relative magnitude of the unknown transfer and the other transfers
affecting the system for the quantity in question must be compared.
This introduces an equilibrium condition, from which the state and the
flow can be reconstructed.

Case 3: Some capacity effects are negligible. This yields directly the singular
perturbation of the respective accumulation terms, thus eliminating those states
from the dynamics. This is the basic assumption made when mapping a transfer
system into a simple resistance.
Case 4: Some secondary states are constant. This leads to algebraic constraints
imposed on the fundamental state and requires a state-space reduction. It should
always be cleared out before simulating the process, as otherwise it generates a model
of higher differential index, which causes problems when integrating. Such problems
can also occur dynamically, as for example a control action can stop the
dynamics of a part of a system.
Chapter 3

Approximating Distributed Systems

Distributed systems, described by partial differential equations, can rarely be
solved in closed form. In most cases one has to resort to numerical solutions,
which use one or the other approximation. Solving distributed systems
has become a discipline of its own within the respective application fields, such
as computational fluid mechanics. What follows can consequently not be comprehensive.
The purpose is to give a feel for what discretisation is all about and to
view it from the structural point of view, namely as a state reduction method.
The latter is also the reason distributed systems are called infinite-dimensional systems,
whilst the approximations lead to finite-dimensional systems. The literature
dealing with this subject is primarily associated with numerical mathematics,
as the approximations are most frequently used to solve distributed models numerically
(Hildebrand, 1956; Schwarz, 1989; Atkinson, 1989).
The main idea is to discretise the space of the independent co-ordinates, thus
the time and spatial co-ordinates. Most commonly one discretises in the spatial
co-ordinates first, as the numerical integrators discretise intrinsically in the time
co-ordinate. It should be noted, though, that the two discretisations are not independent,
as they can be interpreted as time-scale and length-scale assumptions,
which are intimately linked by the very nature of things.

3.1 Finite Difference Approximation

The approximation can be applied to any derivative, in time as well as in space,
etc. The mathematical basis is a polynomial approximation, specifically, Taylor
approximations. Taking a spatially dependent description, the discretisation in
the spatial domain shall be discussed first.
The idea of any discretisation method is to introduce a grid of points in the
domain to be approximated. The idea is to make the equations describe the
behaviour of the system at each point only, instead of on a continuum in the domain.


The grid, whilst often of constant grid width, does not have to be constant. In
fact, looking at the estimation errors, it is interesting to adapt the grid to the
changing function being approximated, thus to the distribution and the dynamics.
Since the purpose is to introduce and explain the core idea and background of
the method, the simplest case is chosen, namely a one-dimensional problem and
a constant grid. For the purpose of demonstration, let x be a state and r a scalar
independent variable; thus dx/dr is the scalar first derivative of x with respect to r
and d²x/dr² the corresponding second derivative. Let further r_k denote the k-th point
in the one-dimensional grid. Having the objective to approximate second-order
derivatives, the minimal number of approximation points is three. A generic
set of points is defined by labelling the three points with the subscripts 0, 1, 2, with 0
indicating the point k−1, 1 the point k, and 2 the point k+1. In each point the
state function can be extended in a Taylor series:

x(r_k + h) := Σ_{i:=0}^{n} (1/i!) ∂^i x/∂r^i |_{r_k} h^i + (1/(n+1)!) ∂^{n+1}x/∂r^{n+1} |_ξ h^{n+1} . (3.1)

Making two approximations for each point provides six equations, enabling one to
solve for the first and the second derivatives. The solutions are obtained easily
by taking the difference and the sum of the two equations that contain the
desired approximate derivative. In the case of taking the sum, the zero-th and
the even derivatives remain, whilst in the case of taking the difference, the
odd derivatives are eliminated. This reflects into the error estimates for the
approximations.
The six equations are, not showing the error terms:

x_0 := x_1 + ∂x/∂r|_{r_1} (−h) + (1/2) ∂²x/∂r²|_{r_1} (−h)² + ... , (3.2)

x_0 := x_2 + ∂x/∂r|_{r_2} (−2h) + (1/2) ∂²x/∂r²|_{r_2} (−2h)² + ... , (3.3)

x_1 := x_0 + ∂x/∂r|_{r_0} h + (1/2) ∂²x/∂r²|_{r_0} h² + ... , (3.4)

x_1 := x_2 + ∂x/∂r|_{r_2} (−h) + (1/2) ∂²x/∂r²|_{r_2} (−h)² + ... , (3.5)

x_2 := x_0 + ∂x/∂r|_{r_0} 2h + (1/2) ∂²x/∂r²|_{r_0} (2h)² + ... , (3.6)

x_2 := x_1 + ∂x/∂r|_{r_1} h + (1/2) ∂²x/∂r²|_{r_1} h² + ... . (3.7)

Assuming a constant grid, the grid constant is denoted by h, which further
simplifies the writing. Choosing the appropriate pairs, one extracts the first
and second derivative at one of the three points. Taking the pair Equation (3.2)
and Equation (3.7) and ignoring the error terms for the time being:

x_0 − x_2 := −2 ∂x/∂r|_{r_1} h , (3.8)

∂x/∂r|_{r_1} := −(x_0 − x_2)/(2h) . (3.9)

For the second derivative one finds:

∂²x/∂r²|_{r_1} := (x_0 − 2x_1 + x_2)/h² . (3.10)

For the error terms of the first derivative one finds:

O(h²) := −(h³/(3! 2h)) ( ∂³x/∂r³|_{ξ_{01}} + ∂³x/∂r³|_{ξ_{21}} ) , (3.11)

:= −(h³/(3! 2h)) 2 ∂³x/∂r³|_{ξ_{02}} , (3.12)

:= −(h²/3!) ∂³x/∂r³|_{ξ_{02}} , (3.13)

where ξ_{ab} is the value of r ∈ [r_a, r_b] where |∂³x/∂r³| is maximal.

For the second derivative, truncation only occurs at the 4-th order term:

O(h²) := −(h⁴/(4! h²)) ( ∂⁴x/∂r⁴|_{ξ_{01}} + ∂⁴x/∂r⁴|_{ξ_{21}} ) , (3.14)

:= −(h²/12) ∂⁴x/∂r⁴|_{ξ_{02}} . (3.15)

The pair Equation (3.4) and Equation (3.6) yields the two approximations for
the derivatives at r_0, and finally the pair Equation (3.3) and Equation (3.5) gives
the two at r_2.
The following table lists all the three-point approximations:
Derivative | Approximation | Error estimate
∂x/∂r|_{r_0} | (1/(2h)) (−3x_0 + 4x_1 − x_2) | +(h²/3) ∂³x/∂r³|_ξ
∂x/∂r|_{r_1} | (1/(2h)) (−x_0 + x_2) | −(h²/6) ∂³x/∂r³|_ξ
∂x/∂r|_{r_2} | (1/(2h)) (x_0 − 4x_1 + 3x_2) | +(h²/3) ∂³x/∂r³|_ξ
∂²x/∂r²|_{r_0} | (1/h²) (x_0 − 2x_1 + x_2) | −h ∂³x/∂r³|_{ξ_1} + (h²/6) ∂⁴x/∂r⁴|_{ξ_2}
∂²x/∂r²|_{r_1} | (1/h²) (x_0 − 2x_1 + x_2) | −(h²/12) ∂⁴x/∂r⁴|_ξ
∂²x/∂r²|_{r_2} | (1/h²) (x_0 − 2x_1 + x_2) | +h ∂³x/∂r³|_{ξ_1} + (h²/6) ∂⁴x/∂r⁴|_{ξ_2}

(The error terms are written such that the exact derivative equals the approximation plus the error term.)
Similarly one can get approximations using four points:

Derivative | Approximation | Error estimate
∂x/∂r|_{r_0} | (1/(6h)) (−11x_0 + 18x_1 − 9x_2 + 2x_3) | −(h³/4) ∂⁴x/∂r⁴|_ξ
∂x/∂r|_{r_1} | (1/(6h)) (−2x_0 − 3x_1 + 6x_2 − x_3) | +(h³/12) ∂⁴x/∂r⁴|_ξ
∂x/∂r|_{r_2} | (1/(6h)) (x_0 − 6x_1 + 3x_2 + 2x_3) | −(h³/12) ∂⁴x/∂r⁴|_ξ
∂x/∂r|_{r_3} | (1/(6h)) (−2x_0 + 9x_1 − 18x_2 + 11x_3) | +(h³/4) ∂⁴x/∂r⁴|_ξ
∂²x/∂r²|_{r_0} | (1/(6h²)) (12x_0 − 30x_1 + 24x_2 − 6x_3) | +(11h²/12) ∂⁴x/∂r⁴|_ξ
∂²x/∂r²|_{r_1} | (1/(6h²)) (6x_0 − 12x_1 + 6x_2) | −(h²/12) ∂⁴x/∂r⁴|_ξ
∂²x/∂r²|_{r_2} | (1/(6h²)) (6x_1 − 12x_2 + 6x_3) | −(h²/12) ∂⁴x/∂r⁴|_ξ
∂²x/∂r²|_{r_3} | (1/(6h²)) (−6x_0 + 24x_1 − 30x_2 + 12x_3) | +(11h²/12) ∂⁴x/∂r⁴|_ξ

The analysis can be extended to more points, thereby increasing the accuracy
with which the derivative is approximated, though at the cost of increasing
complexity of the expressions; most commonly the three-point approximations
are being used.
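The order of the error estimates can be checked numerically. A small sketch on x(r) = sin(r): both the central three-point first derivative and the three-point second derivative are O(h²), so halving h should reduce the error by roughly a factor of four.

```python
# Numerical check of the three-point formulas on x(r) = sin(r): both the
# central first derivative and the second derivative are O(h^2), so halving
# h should cut the error by about a factor of four.
import math

def d1(f, r, h):                  # central three-point first derivative
    return (f(r + h) - f(r - h)) / (2.0 * h)

def d2(f, r, h):                  # three-point second derivative
    return (f(r - h) - 2.0 * f(r) + f(r + h)) / h**2

r = 0.7
e1 = abs(d1(math.sin, r, 0.10) - math.cos(r))
e2 = abs(d1(math.sin, r, 0.05) - math.cos(r))
ratio_first = e1 / e2             # ~ 4 for a second-order scheme

e3 = abs(d2(math.sin, r, 0.10) + math.sin(r))
e4 = abs(d2(math.sin, r, 0.05) + math.sin(r))
ratio_second = e3 / e4            # ~ 4 as well
```

The observed ratios of about four confirm the h² leading error terms listed in the table.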

3.1.1 Extension to Higher-Dimensional Problems


This extension is straightforward as the approximation is done in ea h dire tion
separately. However, the resulting equations an be written more on isely using
the fa t that the entral grid points appear in several approximations. For a two
dimensional problem using three point approximations one an ni ely visualise
this fa t Figure 3.1.

1 1 2
1 1 2 2
-4 -4
-4
1 1

Figure 3.1: Finite dieren e approximation. Left: internal point,


entre: zero ux boundary point, right: orner  interse tion in-
terse tion of two zero ux boundaries.

The schemes have many variations, primarily because of matching the rectangular grid to the geometry of the problem. The grid also does not need to be constant or rectangular; particularly in computational fluid mechanics, the grid is adapted to match the changing stream lines, and in some applications the use of triangular grids is of advantage.
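The stencils of Figure 3.1 can be sketched in code. The following is a minimal illustration (not part of the original notes), assuming a uniform square grid; the zero-flux boundaries are implemented by mirroring the first interior point, which doubles the weight of the inward neighbour exactly as in the boundary and corner stencils. The function name `laplacian_zero_flux` is ours.

```python
import numpy as np

def laplacian_zero_flux(x, h):
    """Five-point finite-difference Laplacian on a uniform square grid
    with zero-flux (Neumann) boundaries, per the stencils of Figure 3.1:
    interior weights (1, 1, -4, 1, 1); at boundaries and corners the
    missing neighbours are mirrored, i.e. the inward neighbour counts twice."""
    # Ghost points mirror the first interior point (zero-flux boundary).
    xp = np.pad(x, 1, mode="reflect")
    return (xp[:-2, 1:-1] + xp[2:, 1:-1]
            + xp[1:-1, :-2] + xp[1:-1, 2:]
            - 4.0 * xp[1:-1, 1:-1]) / h**2
```

For a quadratic field x = r1^2 + r2^2 the scheme reproduces the exact Laplacian (4) at all interior points, consistent with the error estimates above.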
Chapter 4

System Theory's {A,B,C,D}

Control and system theory build heavily on a system representation that came to the limelight in the 1950s, when system theory evolved, establishing itself in the decade thereafter. The developments around Kalman culminated in the celebrated paper Kalman (1963), which sets a lot of the overall framework. Interesting is also Zadeh's paper Zadeh (1962), which reflects the shift in philosophy from circuits to systems. Not least, the developments involved a discussion of the term state, as for most applications an input/output representation is sufficient, whilst the whole of the new theory was constructed around the state. The core of the description is a state-space representation of linear, time-invariant systems. Books were written on the subject, for example Kailath (1980) and Chen (1984). Today, linear, time-invariant systems are a core part of control. Whilst most systems are non-linear, the understanding of linear systems is essential and provides important insights and consequently guidelines for nonlinear systems.

4.1 Time-Domain Representation

4.1.1 The Standard Representation


The standard representation of a linear, time-invariant (LTI) system in the time domain is defined by two matrix equations, one for the dynamics and one for the static state-output relation, both being characterised by four matrices:

ẋ := Ax+ Bu, (4.1)


y := Cx + Du. (4.2)

The attribute linearity applies to the state x and the input u on the right-hand side, and to the time derivative of the state and the output y on the left-hand side. The term time-invariant states that the four matrices A, B, C, D are not a function of time.


4.1.1.1 Time-Domain Solution

The solution of the LTI system with D := 0 is:


x(t) := Φ(t) x(0) + ∫_0^t Φ(t − τ) B u(τ) dτ . (4.3)

The first part of the solution reflects the impact of the initial conditions, whilst the second part, the integral, is the convolution of the input with the impulse response, the matrix Φ, which is called the fundamental matrix. It is the exponential of the system matrix A with the respective time argument:

Φ(t) := eA t , (4.4)
Λt
:= Ve V −1
. (4.5)

with V and Λ being the eigenvector and the eigenvalue matrix, respectively. The exponential of the eigenvalue matrix is defined as the diagonal matrix of the exponentials of the eigenvalues times the time argument, with all off-diagonal elements being zero.
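As an illustration of Equations (4.4)/(4.5), the fundamental matrix can be computed from the eigendecomposition. This is a sketch (not from the notes), assuming A is diagonalisable; the function name is ours.

```python
import numpy as np

def fundamental_matrix(A, t):
    """Phi(t) = exp(A t) computed as V exp(Lambda t) V^{-1} (Eqs. 4.4/4.5).
    Assumes A is diagonalisable; the result is real for real A."""
    lam, V = np.linalg.eig(A)
    # exp of the eigenvalue matrix: diagonal of exp(lambda_i * t)
    return (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real
```

The semigroup property Φ(t + s) = Φ(t) Φ(s) and Φ(0) = I provide easy consistency checks.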

4.1.1.2 Sampled LTI-System

With computers being time-discrete units, sampled-data systems have become essential for the operation of a plant. The most common version assumes that the sampling is instantaneous. The sampler represents the analogue/digital unit in the process interface, and the discrete input is connected to a zero-order-hold unit representing the digital/analogue conversion unit. In most cases, the sampling rate is constant and the behaviour of the sampled-data system can be readily computed from the solution of the continuous system by integrating recursively over a time interval for a given input:
x(t + ∆t) := Φ(∆t) x(t) + ∫_t^{t+∆t} Φ(t + ∆t − τ) B u(τ) dτ . (4.6)

Using the notation k for t = k ∆t and with u(τ) := u(k) thus being held constant by a zero-order-hold (ZOH) element during each time period, the integral can be simplified:
x(k + 1) := Φ(∆t) x(k) + ∫_0^{∆t} Φ(τ) dτ B u(k) , (4.7)

:= Φ(∆t) x(k) + ( ∫_0^{∆t} Φ(τ) dτ ) B u(k) , (4.8)

:= Φ(∆t) x(k) + Γ u(k) . (4.9)

If A is non-singular, then Γ can be calculated readily:

Γ(∆t) := A^{−1} ( e^{A ∆t} − I ) B , (4.10)

:= A^{−1} ( Φ(∆t) − I ) B . (4.11)
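A hedged sketch of the discretisation (4.9)-(4.11), assuming A is diagonalisable and non-singular; the helper name `discretise_zoh` is ours.

```python
import numpy as np

def discretise_zoh(A, B, dt):
    """Sampled-data matrices for x(k+1) = Phi x(k) + Gamma u(k):
    Phi = exp(A dt) via eigendecomposition (Eq. 4.5), and
    Gamma = A^{-1} (Phi - I) B (Eq. 4.11); assumes A diagonalisable,
    non-singular."""
    lam, V = np.linalg.eig(A)
    Phi = (V @ np.diag(np.exp(lam * dt)) @ np.linalg.inv(V)).real
    Gamma = np.linalg.solve(A, (Phi - np.eye(A.shape[0])) @ B)
    return Phi, Gamma
```

For the scalar system ẋ = −x + u with ∆t = 0.5 this reproduces the exact step response coefficients Φ = e^{−0.5} and Γ = 1 − e^{−0.5}.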

Also the case of a singular matrix A is not difficult to treat. A variable transformation on the integral yields

∫_{t_k}^{t_{k+1}} e^{A (t_{k+1} − τ)} dτ := ∫_0^{∆t} V e^{Λ τ′} V^{−1} dτ′ , (4.12)

:= V ( ∫_0^{∆t} e^{Λ τ′} dτ′ ) V^{−1} , (4.13)

:= V diag_i ( ∫_0^{∆t} e^{λ_i τ} dτ if λ_i ≠ 0 ; ∫_0^{∆t} 1 dτ if λ_i = 0 ) V^{−1} , (4.14)

:= V diag_i ( (1/λ_i) e^{λ_i τ} |_0^{∆t} if λ_i ≠ 0 ; τ |_0^{∆t} if λ_i = 0 ) V^{−1} , (4.15)

:= V diag_i ( (1/λ_i) (e^{λ_i ∆t} − 1) if λ_i ≠ 0 ; ∆t if λ_i = 0 ) V^{−1} . (4.16)

Finally, the output is simply:

y(k) := C x(k) + D u(k) . (4.17)

4.1.1.3 Discrete System Representation Using Shift Operators

The use of operators is often convenient because it allows condensing the equations and thus reduces writing, at the same time providing a better overview of the problem being treated. Operator calculus is widely used in manipulating differential and difference equations with constant coefficients. For differential operations, a differential operator is defined¹. For difference equations, a shift operator is defined.

Let {f(k) | k := . . . , −1, 0, 1, . . . } be a two-sided, infinite sequence of data points representing the discrete signal. The forward shift operator, q, is then defined by:

q f(k) := f(k + 1) (4.18)

and the backward shift operator, q^{−1}, is defined by the relation:

q^{−1} f(k) := f(k − 1) (4.19)

The norm of the two operators is one, that is

||q|| = ||q^{−1}|| := 1 (4.20)

Note: The shift operators are bounded; in fact their norm is 1. This is not the case for their counterpart, the differential operator, which is unbounded. This is one of the reasons why difference calculus is simpler than differential calculus.
1 Frequently used symbols are D, d and p.

Forward shift operators are convenient when discussing stability and the order of a system. The backward shift operator is handy in problems related to causality. The manipulation of difference equations is illustrated by the following example:

y(k + n) + a1 y(k + n − 1) + a2 y(k + n − 2) + · · · + an y(k) =
b0 u(k + m) + b1 u(k + m − 1) + b2 u(k + m − 2) + · · · + bm u(k) (4.21)

which with the forward shift operator writes:

(q^n + a1 q^{n−1} + a2 q^{n−2} + · · · + an) y(k) = (4.22)
(b0 q^m + b1 q^{m−1} + b2 q^{m−2} + · · · + bm) u(k) (4.23)

Defining the two polynomials in the shift operator:

A(q) := q^n + a1 q^{n−1} + a2 q^{n−2} + · · · + an (4.24)

B(q) := b0 q^m + b1 q^{m−1} + b2 q^{m−2} + · · · + bm (4.25)

the difference equation can be written very compactly:

A(q) y(k) = B(q) u(k) (4.26)

Note: The shift operator calculus, as it is introduced in this section, does not handle non-zero initial conditions. Additional terms must be introduced to also include non-zero initial conditions.

The shift operator representation can now be used for solving difference equations, for example the following simple first-order difference equation:

y(k + 1) − a y(k) = b u(k) (4.27)

(q − a) y(k) = b u(k) (4.28)

y(k) = b/(q − a) u(k) or (4.29)

= b q^{−1}/(1 − a q^{−1}) u(k) (4.30)

Since ||q^{−1}|| = 1, this result can be expanded in a series (convergent for |a| < 1):

y(k) = b q^{−1} (1 + a q^{−1} + a^2 q^{−2} + . . . ) u(k) (4.31)

= b Σ_{i:=1}^{∞} a^{i−1} u(k − i) (4.32)

Compare this result with the general solution:

y(k) = C Φ^k x(0) + Σ_{j:=0}^{k−1} C Φ^{k−j−1} Γ u(j) (4.33)

= C Φ^k x(0) + Σ_{i:=1}^{k} C Φ^{i−1} Γ u(k − i) (4.34)

with

C := 1 (4.35)
Φ := a (4.36)
Γ := b (4.37)

y(k) = a^k x(0) + Σ_{i:=1}^{k} a^{i−1} b u(k − i) (4.38)
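The agreement between the closed-form solution (4.38) and the underlying recursion can be checked numerically. The numbers below (a = 0.5, b = 2, x(0) = 1, unit-step input) are arbitrary illustration values, not from the notes.

```python
# Hypothetical illustration values for the first-order difference equation.
a, b, x0 = 0.5, 2.0, 1.0
u = [1.0] * 10                      # unit-step input

# recursion y(k+1) = a y(k) + b u(k), with C = 1, Phi = a, Gamma = b
y_rec = [x0]
for k in range(9):
    y_rec.append(a * y_rec[-1] + b * u[k])

# closed form (Eq. 4.38): y(k) = a^k x(0) + sum_{i=1}^{k} a^{i-1} b u(k-i)
y_series = [a**k * x0 + sum(a**(i - 1) * b * u[k - i] for i in range(1, k + 1))
            for k in range(10)]
```

Both sequences coincide term by term, which is exactly the statement of the comparison above.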

Note: The shift operator plays the same role in difference calculus as the differential operator does in differential calculus. It must therefore be well distinguished from the z operator, which is used in the so-called z-transformation. The z-transformation is the analogue of the s-transformation.

4.1.2 Kalman's Decomposition


In his paper, Kalman (1963) shows that every ABCD representation can be subdivided into four subsystems: a controllable and observable (CO), a controllable but not observable (C), an observable but not controllable (O), and a part that is neither controllable nor observable (N). The decomposition can be achieved through a similarity transformation, as was discussed in Section 2.5.3.

Figure 4.1: Kalman's decomposition into CO: controllable and observable, C: controllable only, O: observable only, N: neither controllable nor observable subsystems.

The controllability matrix provides a black/white criterion for a system to be controllable. It is defined as:

Kc := [ B, A B, . . . , A^{n−1} B ] . (4.39)

where n is the dimension of the state vector.



The observability matrix is the dual of the controllability matrix, stacking the blocks row-wise:

Ko := [ C ; C A ; . . . ; C A^{n−1} ] . (4.40)

The system ABCD is said to be controllable if the matrix Kc is of rank n, and it is observable if Ko is of rank n.
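A small sketch of the rank tests, building Kc and Ko per (4.39)/(4.40) with numpy; the function names are ours.

```python
import numpy as np

def controllability_matrix(A, B):
    """Kc := [B, A B, ..., A^{n-1} B] (Eq. 4.39)."""
    blocks, M = [B], B
    for _ in range(A.shape[0] - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)

def observability_matrix(A, C):
    """Ko := [C; C A; ...; C A^{n-1}] (Eq. 4.40), the dual of Kc."""
    blocks, M = [C], C
    for _ in range(A.shape[0] - 1):
        M = M @ A
        blocks.append(M)
    return np.vstack(blocks)
```

The system is controllable (observable) when the corresponding matrix has rank n, checked with `np.linalg.matrix_rank`.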

4.2 Getting the LTI System from the Mechanistic Model

With the mechanistic representation being nonlinear, the basic operation required for obtaining an LTI representation is linearisation. This in itself is relatively simple. Difficulties in general arise from the view one takes of the plant and consequently the representation one starts with.
Most commonly, one starts with a representation that we termed a text-book representation (Section 2.5.2). The motivation is usually straightforward, in that it is commonly not the conserved quantities that are of interest, but some of the intensive quantities such as temperature and composition. Alternatively, one argues the choice based on what can be measured. Thus, depending on what one takes as the state, the dynamic equations will look different.

The second issue is to determine what the control variables, namely the u, are. Looking at a particular system from a physical point of view, there are two classes of variables that one needs to consider. The first one is the state of the environment of the system being looked at: the plant, which is being controlled, can only be influenced by its environment. All changes are driven by the environment, that is, the state of the part of the environment connected to the plant is driving the change in the plant (Section 2.8). Control is introduced on top by manipulating the resistance of some of the flows between the environment and the plant. The key to introducing control thus is a separation of the world into two parts, namely the environment and the plant (Figure 2.19). Figure 4.2 shows a plant separated from its environment, with which it communicates extensive quantities through two interface systems being part of the environment. The controller in general gets information about the plant itself and the interface, usually from the place where the plant is connected. The two interface systems are introduced to make the two connection points explicitly visible. The further connection of these two systems within the environment is not relevant for the discussion.
Looking at the plant P, it is driven by the environment: the state of the environment determines the direction of the move of the plant and also its limitations. The controller can only constrain the flow, and thus limit the effect of the available driving force, being the difference between the two connecting points U and P. The consequence of this analysis is that there exist two different classes of variables that affect the movement of the state of the plant: the state of the connected systems in the environment, providing the potential driving force, and the resistance of the valve manipulated by the controller. It should be noted that only one of the two is actually manipulated, so one could consider the state of the environment components to enter as disturbances or loads, depending on the viewpoint one takes. For the reason of lifting out the difference, we shall do the latter and label the resistance manipulation as the control input u.

Figure 4.2: The plant and its environment exchanging extensive quantity through a connection that can be manipulated and a second one that cannot be manipulated.
Assuming some limited connectivity and only reactions taking place in P, the model could then read:

ẋU := x̂U|E − x̂U|P , (4.41)


ẋR := x̂R|E − x̂R|P , (4.42)
ẋP := x̂U|P + x̂R|P + SP x̃P . (4.43)

With dU := LU yU and dR := LR yR

x̂U|P := x̂U|P (dU , LP yP , ΘU|P , uU|P ) , (4.44)


x̂R|P := x̂R|P (dR , LP yP , ΘR|P ) , (4.45)

the production rate:

x̃P := x̃P (yP ) , (4.46)



Adding the variable transformations:

0 := si (yi , xi ) i ∈ {U, R, P } , (4.47)

completes the model. The two matrices LU and LR select a sub-vector. The control action u is most conveniently normed to the interval [0, 1], with the characteristics of the connection being captured in the parameter vector Θ for the respective connection.

Aiming at an ABCD description for the plant, we need to linearise around a steady-state point.

Substitution yields:

ẋP := x̂U|P (dU , LP yP , ΘU|P , uU|P ) + x̂R|P (dR , LP yP , ΘR|P ) + SP x̃P . (4.48)

Indicating steady-state conditions with a ∗, the linearisation yields:

∆ẋP := [ ( ∂x̂U|P/∂yP + ∂x̂R|P/∂yP + SP ∂x̃P/∂yP ) ∂yP/∂xP ]∗ ∆xP +
+ ( ∂x̂U|P/∂dU )∗ ∆dU + ( ∂x̂R|P/∂dR )∗ ∆dR + ( ∂x̂U|P/∂u )∗ ∆u . (4.49)

Assuming some mild regularity of the variable transformations, one gets from the implicit relation (4.47):

∂yP/∂xP := − ( ∂s/∂yP )^{−1} ( ∂s/∂xP ) . (4.50)
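In practice, the Jacobians appearing in (4.49) are often evaluated numerically. The following sketch (our construction, not from the notes) obtains the A and B matrices of the LTI approximation by forward differences of a generic right-hand side f(x, u) around a steady state.

```python
import numpy as np

def linearise(f, x_ss, u_ss, eps=1e-6):
    """Forward-difference Jacobians of x_dot = f(x, u) at (x_ss, u_ss),
    returning the A and B matrices of the local LTI approximation.
    A sketch only: a mechanistic model would supply f."""
    f0 = np.asarray(f(x_ss, u_ss))
    n, m = len(x_ss), len(u_ss)
    A = np.zeros((len(f0), n))
    B = np.zeros((len(f0), m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (np.asarray(f(x_ss + dx, u_ss)) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (np.asarray(f(x_ss, u_ss + du)) - f0) / eps
    return A, B
```

Applied to an already linear f, the procedure recovers the exact matrices, which serves as a basic sanity check.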

4.3 Frequency Domain Representation

Characterising the ability of a model to describe a system is intimately related to what is being compared. Most commonly, a comparison is based on input/output responses, that is, one applies a disturbance to the plant as well as to the model and compares the two experimental results. A measure must then be introduced to provide a criterion based on which the quality of the model can be judged. At least intuitively it is clear that the results obtained from such experiments do not only depend on the model itself, but also on the type of input signal that is applied to the plant and the model. It is also intuitively clear that it is not only the magnitude of the signal, but also the dynamics of the signal that affects the result. Thus the idea came about to use pure dynamics, namely periodic functions, sinusoids, to be specific.

This thinking gave rise to the frequency domain representation, the Laplace domain representation.
The transformation of an ABCD representation is straightforward:

s x(s) − x(0) = A x(s) + B u(s) , (4.51)

y(s) := C x(s) + D u(s) . (4.52)

Isolating x yields:

x(s) = ( s I − A )^{−1} ( x(0) + B u(s) ) , (4.53)

y(s) := C x(s) + D u(s) , (4.54)

and

y(s) = C ( s I − A )^{−1} ( x(0) + B u(s) ) + D u(s) , (4.55)

:= C ( s I − A )^{−1} x(0) + ( C ( s I − A )^{−1} B + D ) u(s) . (4.56)

The solution in the Laplace domain has two components. The first is

C ( s I − A )^{−1} x(0) ,

which reflects the effect of the initial conditions over time, whilst the second part, namely

( C ( s I − A )^{−1} B + D ) u(s) ,

represents the effect of the inputs.

The matrix

G(s) := C ( s I − A )^{−1} B + D , (4.57)

is called the transfer function matrix and represents the pure input/output behaviour of the modelled plant.
To understand the transfer function matrix better, it is illustrative to rewrite the inverse as the fraction of the adjoint and the determinant:

G(s) := C ( adj(s I − A) / |s I − A| ) B + D . (4.58)

The determinant |s I − A| is the characteristic polynomial of the matrix A, and thus its roots are the eigenvalues of A. The matrix adj(s I − A) is a matrix of polynomials in the Laplace variable s.


Because it describes the transfer of the input to the output, the transfer function matrix G(s) maps the inputs, u(s), into the outputs, y(s):

y(s) := G(s) u(s) (4.59)

Each of the individual transfer functions in the transfer function matrix can be written as a ratio of two polynomials in s, as derived in Section 4.3.2:

G(s) := [ Bi,j(s) / A(s) ] . (4.60)

The numerator polynomial Bi,j(s) varies with i, j, whilst the denominator polynomial is always the same. The dynamic properties are thus characterised by the roots of the Bi,j(s) polynomials, the zeros, and the roots of the characteristic polynomial A(s), thus the eigenvalues of the matrix A, the poles. It is the poles that determine stability: the system is stable if all the poles are in the left half plane (Chapter 5).

4.3.1 Transfer Functions Are Complex

Since the Laplace variable s is a complex number, g(s) is a complex function and can therefore be represented in the form

g(s) = |g(s)| e^{iϕ} (4.61)

where |g(s)| is the absolute value of g(s) and ϕ the phase angle.

Figure 4.3: Transfer functions in series

For n signal operation blocks in series, as indicated in Figure 4.3, the overall transfer function is related to the transfer functions of the individual blocks by

g(s) = gn (s) gn−1 (s) . . . g1 (s) (4.62)

thus
g(s) = |gn (s)| |gn−1 (s)| . . . |g1 (s)| ei(ϕn +ϕn−1 + ... +ϕ1 ) (4.63)

The amplitude of the overall transfer function is therefore

|g(s)| := |g1(s)| |g2(s)| . . . |gn(s)| (4.64)

and the logarithm of the amplitude calculates as

log |g(s)| := Σ_{i=1}^{n} log |gi(s)| (4.65)

The phase of the overall transfer function is

ϕ(s) := Σ_{i=1}^{n} ϕi(s) (4.66)

4.3.2 Polynomial Transfer Functions

To understand the transfer function matrix (Equation (4.58)) better, it is illustrative to rewrite the inverse as the fraction of the adjoint and the determinant:

G(s) := C ( adj(s I − A) / |s I − A| ) B + D . (4.67)

The determinant |s I − A| is the characteristic polynomial of the matrix A, and thus its roots are the eigenvalues of A, which are also called the poles of the respective transfer function.

The adjoint adj(I s − A) is a matrix of polynomials in the Laplace variable s. This can be seen by analysing the definition of the adjoint matrix². Let rij be the ij-th cofactor of the matrix Q, defined by

rij := cof_ij(Q) := (−1)^{i+j} |Q_ij| (4.73)

with |Q_ij| being the determinant of the ij-minor obtained by deleting the i-th row and the j-th column of the matrix Q. Determinants can be recursively expanded into weighted sums of sub-determinants. With Q := I s − A, the cofactors become polynomials in s. Each individual transfer function in G is a ratio of two polynomials, namely

g_ij(s) := ( Σ_{∀l} Σ_{∀r} c_il cof_rl(I s − A) b_rj ) / |I s − A| (4.74)

:= B_ij(s) / A(s) (4.75)

where B_ij(s) and A(s) are scalar polynomials. The notation for the two polynomials reflects their origin: A(s) is associated with the A matrix, whilst the B(s) polynomial is associated with the B matrix.

For the coming analysis, let the transfer function be g(s) := k B(s)/A(s), where B(s) and A(s) are the scalar monic³ polynomials of the numerator and the denominator, respectively, and k is the gain⁴. It is assumed that both polynomials are of the form:

P(s) := Σ_{i:=0,...,n} ai s^i ; an := 1 (monic polynomial) (4.76)

Polynomials have any combination of

- zero roots
- real roots (ri)
- conjugate complex roots (ej, e*j)

and a constant gain k. Thus in general the polynomial takes the form

P(s) := k s^l Π_i (s − ri)^{mi} Π_j ( (s − ej)(s − e*j) )^{nj} (4.77)

2 The following relations hold

A A^{−1} = A^{−1} A = I (4.68)

A adj(A) = adj(A) A = |A| I (4.69)

thus

adj(A) A = |A| A^{−1} A (4.70)

adj(A) = |A| A^{−1} (4.71)

A^{−1} = adj(A) / |A| (4.72)

3 monic polynomial :: a polynomial with the leading (highest power) coefficient being 1
4 Note: k is not the steady-state gain, but the gain whereby the rest of the transfer function has a gain of 1. For stable processes, the gain is identical to the steady-state gain (see also Section 4.3.2.3).

The indices i and j run over the set of different roots, whilst mi indicates the number of equal real roots, nj the number of equal conjugate-complex pairs of roots, and l the number of roots that are zero.

The roots of the numerator are called zeros. The term zero reflects the fact that an input with the frequency of the zero is completely absorbed by the system. The roots of the denominator polynomial are the poles, which are the frequencies where the transfer function approaches infinity and thus shows a pole in the graphical representation of the transfer function.

Polynomials in s are the most common form of transfer functions. They are the Laplace transforms of SISO systems described by ordinary differential equations in the time domain. Many systems fit into this class, as becomes apparent when modelling physical-chemical systems.

4.3.2.1 Transfer Functions of Transportation Lags: Dead-Time Elements

Another important class of transfer functions derives from models describing transport lags, that is, differential equations which include terms of the type

x(t − τd) (4.78)

with τd being the dead time. The Laplace transform of this term is

e^{−τd s} x(s) (4.79)

where e^{−τd s} is the transfer function of the transportation lag.

4.3.2.1.1 Example: Transportation Lag

A simple model of flow through a pipe is plug flow, assuming that a flat front moves through the pipe. This model neglects any friction effects, both friction of the fluid on the wall and fluid-internal friction. The model for the pipe is then simply

y(t) := u(t − τd) (4.80)

where τd is the length of the pipe divided by the stream velocity.
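In discrete time, the plug-flow pipe of (4.80) is simply a shift by an integer number of samples. A minimal sketch (our construction), with the delay realised as a ring buffer:

```python
from collections import deque

def make_delay(n_samples, initial=0.0):
    """Discrete-time transportation lag: y(k) = u(k - n_samples),
    a dead time of n_samples sampling periods (cf. Eq. 4.80).
    The pipe content is held in a ring buffer, initialised with `initial`."""
    buf = deque([initial] * n_samples)
    def step(u):
        buf.append(u)       # new front enters the pipe
        return buf.popleft()  # oldest front leaves the pipe
    return step
```

A delay of three samples returns zeros (the initial pipe content) for the first three steps and then reproduces the input shifted by three.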

4.3.2.2 Graphical Representation of Transfer Functions

It is often easier to assess the content of data more quickly and more comprehensively from a graphical representation. This also applies to transfer functions. Since these are complex functions, one can either represent the magnitude and the argument of the complex number in two separate plots, which results essentially in the Bode plot, or one uses a polar representation, which is the Nyquist plot. These are the two most commonly used representations. The Bode plot uses a log-log representation for the amplitude and a semi-log representation for the phase. The choice of scales supports the factorisation of transfer functions, as discussed in the section Transfer Functions Are Complex.
In the case of a single-input-single-output (SISO) system with the transfer function g(s), the amplitude ratio and the phase are given by:

amplitude ratio    |g(s)| := √( [ℜ(g(s))]^2 + [ℑ(g(s))]^2 ) (4.81)

phase shift        ϕ(s) = arg(g(s)) = arctan( ℑ(g(s)) / ℜ(g(s)) ) (4.82)

ℜ(·) :: real part
ℑ(·) :: imaginary part
arg(·) :: argument
s :: Laplace variable := i ω
i := √(−1)
ω :: frequency

The Bode plots are the two plots of a SISO system:

Amplitude plot: log |g(iω)| vs. log(ω)
Phase plot: ϕ vs. log(ω)

The Nyquist plot is the polar plot of ℑ(g(s)) vs. ℜ(g(s)). The distance from the origin to any point [ℑ(g(s)), ℜ(g(s))] is one of the co-ordinates, the other one is the angle ϕ.
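As an illustration (our example, not from the notes), the Bode data of the first-order transfer function g(s) = 1/(τs + 1) can be computed directly from (4.81)/(4.82) by evaluating g on the imaginary axis:

```python
import numpy as np

# Bode data for g(s) = 1/(tau*s + 1), evaluated at s = i*omega.
tau = 2.0
omega = np.logspace(-2, 2, 200)        # frequency axis, log-spaced
g = 1.0 / (tau * 1j * omega + 1.0)     # g(i omega)
amplitude = np.abs(g)                  # amplitude ratio (Eq. 4.81)
phase = np.angle(g)                    # phase shift in radians (Eq. 4.82)
```

At the corner frequency ω = 1/τ the amplitude is 1/√2 and the phase −45°, consistent with the asymptote construction of Section 4.3.2.3.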

4.3.2.3 Approximation of Polynomial Transfer Functions

    components                                 s small    s large       intersection      figure

    |go| := |K|                                |K|        |K|           −                 4.4
    |gl| := |s^l| = s^l                        s^l        s^l           −                 4.5
    |gi| := |τi s + 1|                         1          |τi| s        s× := 1/|τi|      4.6, 4.7
    |gj| := |(s/(−ej) + 1)(s/(−e*j) + 1)|      1          s^2/|ej|^2    s× := |ej|        4.8

Table 4.1: Computation of the approximate amplitude plot for different elementary transfer functions

It is often handy to get a quick impression of the overall behaviour of a system. Sketching the Bode plots is one of the ways one gets insight into the system's behaviour in the frequency domain. For polynomial transfer functions, which

    root(s)                                                 begin      change

    no root : K > 0                                         0°         −
    no root : K < 0                                         −180°      −
    zero root : ri := 0                                     +90°       −
    real root : ri ∈ R                                      0°         −sign(ri) 90°
    conjugate pairs of complex roots : ej, e*j ∈ C          0°         −sign(ℜ(ej)) 180°

Table 4.2: Computation of the approximate phase plot for different elementary transfer functions

represent the most common class of system models, approximations can be constructed easily. For this purpose, Equation 4.77 is recast into the following form:

P(s) := k s^l Π_i (−ri)^{mi} Π_i ( s/(−ri) + 1 )^{mi} Π_j |ej|^{2 nj} Π_j ( (s/(−ej) + 1)(s/(−e*j) + 1) )^{nj} (4.83)

:= k ( Π_i (−ri)^{mi} Π_j |ej|^{2 nj} ) ( s^l Π_i ( s/(−ri) + 1 )^{mi} ) ( Π_j ( (s/(−ej) + 1)(s/(−e*j) + 1) )^{nj} ) (4.84)

:= K s^l Π_i (τi s + 1)^{mi} Π_j ( (s/(−ej) + 1)(s/(−e*j) + 1) )^{nj} (4.85)

:= go gl(s) ( Π_i gi^{mi}(s) ) ( Π_j gj^{nj}(s) ) (4.86)

The rules applicable to complex numbers, Equations 4.65 and 4.66, describe how composite transfer functions are constructed from the individual factors. It is thus sufficient to provide the approximations for the basic components, which are constructed from asymptotes. The construction of the asymptotes for the basic transfer functions is shown in the two tables: 4.1 for the magnitude plot and 4.2 for the phase plot. Example plots are given below, which are also referenced in Table 4.1. Notice that the gain K is the steady-state gain if the transfer function has no zero poles. For the phase of a constant transfer function, the convention is used as indicated before: it is constant, either at 0° or −180°, depending on the sign of the steady-state gain. The phase for zero-root transfer functions is also a constant (or a step that occurs at negative infinity).

For the other primitive transfer functions, the simplest approximation of the phase is a step. The step occurs at the corner frequency, which is the intersection of the low-frequency and the high-frequency asymptotes in the amplitude plot. The sign of the root determines the direction of the step.

4.3.2.3.1 A Recipe Approach to Visualise Approximations of Bode Plots

The graphical representation of rational transfer functions is easily achieved if one follows the steps deriving from the above. Given a rational transfer function as a ratio of polynomials:

• Make the polynomials monic: factor such that the leading coefficient is 1, the leading coefficient being that of the highest-order term.

• Find the roots of the two polynomials and write the polynomials in product form.

• Factorise into primitive polynomials: constant, zero roots, real roots, pairs of conjugate complex roots.

• Sketch each primitive in the Bode plot using the asymptotes as tabled above.

• Combine to find the overall transfer function.

4.3.2.3.2 Elementary transfer functions

No root, gain only:

Steady-state systems have no dynamics; thus the polynomial of the transfer function is of zeroth order and has no roots. In such a system the output is linked through a factor to the input. In the simplest case this factor is a constant, which makes the system a linear system:

y(s) = k u(s) (4.87)

g(s) := k (4.88)

The amplitude of the transfer function, |k|, shows as a horizontal line in the Bode plot (Figure 4.4, where k := 2). For the phase, the sign of the factor, also called the gain or the steady-state gain, determines the value of the constant phase. For a realisable system, the phase is 0° for k > 0 and −180° for k < 0, by convention.

One Zero Root: The two cases s and 1/s are important. Both appear frequently as components in polynomial transfer functions. The first is a differential operation, an unbounded operation, and the second a pure integrator. Pure differential operators are not realisable, whilst integrators are common elements one finds in plants. For example, a tank with inlets and outlets is an integrator in terms of mass. The Bode plot of the differentiator is shown in Figure 4.5.

[Bode plot: amplitude ratio log10|g| and phase vs. frequency log10(ω); num: [2], den: [1], zer: [], pol: []]

Figure 4.4: No root

[Bode plot; num: [1 0], den: [1], zer: [0], pol: []]

Figure 4.5: Zero root in numerator



[Bode plot; num: [1 1], den: [1], zer: [−1], pol: []]

Figure 4.6: One negative real root

[Bode plot; num: [−1 1], den: [1], zer: [1], pol: []]

Figure 4.7: One positive real root



One Real Root Only: The simplest version of a first-order system has one root only. For example, a SISO first-order system of this type,

ẋ = −1/τ (x + u) (4.89)

has the amplitude and the phase

|g(s)| = 1/√(τ^2 ω^2 + 1) (4.90)

ϕ = arctan(−τ ω) (4.91)

Figure 4.6 shows the case where the root is negative, here −1, and Figure 4.7 has a root of +1. Note that the amplitude is not affected by the sign change, whilst the phase changes sign.
Complex conjugate roots: Second-order systems are the simplest that may exhibit oscillatory behaviour. Oscillations are characterised by complex roots of the denominator polynomial. Complex roots always appear in pairs, the conjugate complex pairs. Since the oscillating behaviour is of so much interest, it is common practice to parameterise second-order systems in a special way, which is:

g(s) := 1 / ( 1 + (2ξ/ωn) s + (1/ωn^2) s^2 ) (4.92)

ξ is the damping factor and ωn is the critical frequency. For a damping factor in the range 0 < ξ < 1 the roots are complex. Outside this interval, the roots are real. The latter case can be reduced to the product of two first-order systems with real roots, a case which was discussed above. Figure 4.8 shows the Bode plots for a frequency normalised with the critical frequency.

Pure delay - a dead time of 1: The transfer function of the dead-time element was introduced in Equation 4.79. The Bode plot is shown in Figure 4.9.

Approximate Bode Plot of a Composite System:

This example demonstrates how an approximation of a relatively complex transfer function can be obtained quickly. The transfer function is:

g(s) := (s^2 + 11 s + 10) / (50 s^3 + 15 s^2 + s) (4.93)

Firstly, this expression is modified to obtain the form of Equation 4.76, namely the ratio of two monic polynomials:

g(s) := (1/50) (s^2 + 11 s + 10) / (s^3 + (3/10) s^2 + (1/50) s) (4.94)

Next, the roots of the two polynomials are computed. The roots of the numerator polynomial are {−10, −1}, the zeros, and the roots of the denominator polynomial are {0, −0.2, −0.1}, the poles. The roots are now used for factorising the two polynomials:

g(s) := (1/50) ( (s + 10)(s + 1) ) / ( s (s + 0.2)(s + 0.1) ) (4.95)

[Bode plot; num: [1 −2 2], den: [1], zer: [1+1i, 1−1i], pol: []]

Figure 4.8: A pair of conjugate complex roots

[Bode plot of a pure delay of 1]

Figure 4.9: A pure delay, a dead time of 1



which is rewritten in the standard form:

g(s) := (1/50) (10/(0.2 · 0.1)) ( (s/10 + 1)(s + 1) ) / ( s (s/0.2 + 1)(s/0.1 + 1) ) (4.96)

:= 10 ( (s/10 + 1)(s + 1) ) / ( s (s/0.2 + 1)(s/0.1 + 1) ) (4.97)

Rewriting this expression again lets the components stand out clearly:

g(s) := 10 (s/10 + 1) (s + 1) s^{−1} (s/0.2 + 1)^{−1} (s/0.1 + 1)^{−1} (4.98)

:= go g1 g2 g3 g4 g5 (4.99)

Adding the following rules as additional ingredients, the sketching of the approximate Bode plot is done in a jiffy (see Figure 4.10):

|g(s)^{−1}| = |g(s)|^{−1} (4.100)

arg( g(s)^{−1} ) = − arg( g(s) ) (4.101)
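The factorisation steps of the worked example can be checked numerically; the following sketch (our addition) recovers the zeros and poles of (4.93) with numpy:

```python
import numpy as np

# Zeros and poles of g(s) = (s^2 + 11 s + 10) / (50 s^3 + 15 s^2 + s),
# reproducing the factorisation step of the worked example.
zeros = np.roots([1, 11, 10])       # numerator roots: the zeros
poles = np.roots([50, 15, 1, 0])    # denominator roots: the poles
```

The computed zeros are {−10, −1} and the poles {0, −0.2, −0.1}, matching the factorised form (4.95).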

[Bode plot; num: [1 11 10], den: [50 15 1 0], zer: [−10 −1], pol: [0 −0.2 −0.1]]

Figure 4.10: Bode plot of the composite transfer function (4.93)

4.3.2.3.3 Decibels

Decibels are often used as units in amplitude plots. The amplitude a in decibels, in terms of the amplitude A, is defined by

a = 20 log10(A) (4.102)



4.3.2.3.4 Non-minimal phase systems

Systems with all poles and zeros in the left half plane are minimal phase systems,
whereas systems with poles or zeros in the right half plane are non-minimal phase
systems.
Chapter 5

Stability

Stability is a quite obvious property required of a dynamic system. Controllers are thus designed based on the overall stability of the system. After all, the system shall remain in a certain restricted domain and not just walk off into nowhere.

5.1 The Concept of Stability


Stability is an important property of a dynamic system. Stability is almost always a requirement one places on a controlled system. For unstable plants, controllers are designed to stabilise the plant, and for stable plants the controller is expected not to destabilise the plant, either for changes of the setpoint or for any type of disturbance the system may experience. Since stability is such a fundamental property of systems, it is not surprising that stability theory started very early in the development of system theory. It is indeed a very well developed branch in the study of dynamic systems. The first investigations were done as early as 1877 by Routh, followed by Hurwitz in 1895, and the very general stability theory of Liapunov was published in 1892. It still forms the foundation of stability theory. Major advances were made by Bode and Nyquist in the 1930s.

Figure 5.1: Global stability - a mechanical example

The Liapunov theory was neglected for decades. Since the fifties, however, more and more applications have appeared, taking advantage of the very general nature of the theory

94 CHAPTER 5. STABILITY

which makes it also applicable to nonlinear systems, while most other techniques are only applicable to linear systems.
The concept of stability can be nicely illustrated by a well-known physical system: a ball in the gravity field. Three different equilibrium positions can be identified for this system (Figure 5.1). A slight misplacement of the ball will result in
1. oscillation around the equilibrium state,
2. no change in the equilibrium state, or
3. divergence; the ball drops down away from its current state of equilibrium.
Thus the first two states are stable, while the last one is not.

Figure 5.2: Lo al stability - a me hani al example

Extending the model further by including friction, the ball in the pot displaced from its equilibrium position returns to the equilibrium position either by a damped oscillation or no oscillation at all. Such a system is called asymptotically stable.
Stability is in general a local property, which therefore must be investigated over the whole range of application. In terms of our physical system, a mountain with a dip at the top is an example of a locally stable system. The class of linear systems is an important exception. For them, local stability always implies global stability too, as shall be shown later.

Two stability concepts are of interest:

- those related to the transient behaviour when the system is displaced away from its equilibrium position, and
- those related to input-output stability.

Definition - stable: The equilibrium state xe is called stable if for any given t0 and positive ε, there exists a δ(ε, t0) such that¹

||x0 − xe|| < δ  ⇒  ||x(t; x0, t0) − xe|| < ε ;  ∀ t ≥ t0   (5.1)

Definition - convergent or quasi-asymptotically stable: The equilibrium state xe is called convergent (or quasi-asymptotically stable) if for any t0 there exists a δ1(t0) such that

||x0 − xe|| < δ1  ⇒  lim_{t→∞} x(t; x0, t0) = xe   (5.2)

¹ ||x|| denotes the Euclidean norm, ||x|| = (Σ_{i=1}^n x_i²)^{1/2}
5.2. THE EIGENVALUE|POLE ARGUMENT FOR LINEAR, TIME-INVARIANT SYSTEMS 95

Definition - asymptotically stable: The equilibrium state xe is called asymptotically stable if it is convergent and stable.

Or, more colloquially: a system is stable if, slightly perturbed from its equilibrium state, all subsequent motions remain in the neighbourhood of the equilibrium state.
Asymptotic stability is stronger than stability. It requires in addition that all subsequent motions return to the equilibrium after a small perturbation.

Definition - globally asymptotically stable: The equilibrium state xe is called globally asymptotically stable if it is stable and if every motion converges to the equilibrium state as t → ∞.

[Figure: nested domains around the equilibrium point xe with trajectories 1-5]
1: globally asymptotically stable
2: globally stable
3: stable
4: asymptotically stable
5: unstable
xe: equilibrium point
δi: limit for initial perturbation
ε: desired operating domain

Figure 5.3: Different stability concepts

5.2 The Eigenvalue|Pole Argument for Linear, Time-Invariant Systems

Stability of a free linear, time-invariant system can readily be shown by analysing the solution of the model equations. If the solution does not converge to the equilibrium state but tends to infinity in any of the co-ordinates, starting at an arbitrary point in the state space, the system is unstable.

The solution to the free, linear, time-invariant system {A, B, C := I, D, u(t) := 0 ∀ t} is

x(t) := e^{A t} x(0)   (5.3)

This can be rewritten in the form (see Chapter 3):

x(t) := V e^{Λ t} V^{−1} x(0)   (5.4)

This solution of the unforced system approaches the equilibrium state, which is 0, only if the real parts of all eigenvalues are less than zero. Thus the stability requirement is

ℜ(λi) < 0  ∀ i   (5.5)

The proof is trivial: e^{λi t} for any t > 0 and ℜ(λi) > 0 increases without limit as t increases, and since V is nonsingular, the solution tends to infinity.
For time-invariant, linear systems, stability automatically implies global stability: as shown above, a stable system will converge to the equilibrium point 0 for any initial condition x(0). This statement is proven again below using the direct method of Liapunov.
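Criterion (5.5) translates directly into a few lines of numpy; the matrices below are my own illustrative examples, not taken from the notes:

```python
import numpy as np

def is_asymptotically_stable(A):
    """Eq. (5.5): all eigenvalues of A must have a strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])    # eigenvalues 2, -1
assert is_asymptotically_stable(A_stable)
assert not is_asymptotically_stable(A_unstable)
```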

5.3 Direct Method of Liapunov

If the solutions of the vector differential equation describing the behaviour of a dynamic system are known in explicit form, the stability properties can be determined directly by checking the conditions in the definitions made in the previous section. In general, however, it is not possible to find explicit solutions for these equations, which are non-linear and time-dependent. Only numerical solutions might be available. Therefore, methods have been developed to derive the stability properties of dynamic systems without solving the dynamic equations. The direct or second method of Liapunov is a general technique that has its origin in a generalised energy concept and is hence an extension of Lagrange's stability criterion for mechanical systems (cf. Physics Lectures).
Liapunov theory explores the zero solution of autonomous systems:

ẋ = f(x)  with  f(0) = 0   (5.6)

Theorem 1. Given the system ẋ = f(x) with f(0) = 0, if there exists a scalar Liapunov function v(x) which in a domain Ω in the neighbourhood of the origin satisfies the following conditions:
1) v(x) and ∂v/∂x are continuous
2) v(0) = 0
3) v(x) > 0 for x ≠ 0
4a) v̇(x(t)) ≤ 0 ⇒ origin is stable
4b) v̇(x(t)) < 0 ; x ≠ 0 ⇒ origin is asymptotically stable
4c) v̇(x(t)) ≤ 0 and v̇(x) not identically zero along any trajectory except the origin ⇒ origin is asymptotically stable
where v̇(x(t)) := d/dt v(x(t)).
5.4. STABILITY OF LINEAR, CONTINUOUS SYSTEMS 97

Theorem 2. Given the system ẋ = f(x) with f(0) = 0, if there exists a scalar Liapunov function v(x) which in a domain Ω in the neighbourhood of the origin satisfies the following conditions:
1) v(x) and ∂v/∂x are continuous
2) v(0) = 0
3) v(x) > 0 for x ≠ 0
4) v(x) → ∞ for ||x|| → ∞
5) v̇(x) < 0, or
   v̇(x) ≤ 0 and v̇(x) not identically zero along any trajectory except the origin,
⇒ the origin is globally asymptotically stable.

Proofs (or parts thereof) can be found in textbooks on linear system theory such as Chen, in mathematics texts on ordinary differential equations, or in specialised literature such as J.C. Willems: Stability Theory of Dynamical Systems.
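To make Theorem 1 concrete, here is a small numerical check (my own example, not from the notes) of the classic candidate v(x) = x² for the scalar nonlinear system ẋ = −x³; conditions 2), 3) and 4b) all hold, so the origin is asymptotically stable:

```python
import numpy as np

f = lambda x: -x**3              # autonomous system x' = f(x), with f(0) = 0
v = lambda x: x**2               # candidate Liapunov function
vdot = lambda x: 2 * x * f(x)    # chain rule: v'(x(t)) = (dv/dx) f(x) = -2 x^4

x = np.linspace(-3.0, 3.0, 601)
x = x[x != 0.0]                  # conditions 3) and 4b) concern x != 0
assert v(0.0) == 0.0             # condition 2)
assert np.all(v(x) > 0)          # condition 3)
assert np.all(vdot(x) < 0)       # condition 4b): the origin is asymptotically stable
```

Note that this system is not linear, which is exactly where the Liapunov method has its value.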

5.4 Stability of Linear, Continuous Systems

5.4.1 Free, or Autonomous Systems

Theorem 3. The linear system ẋ = A x is asymptotically stable if ℜ(λi(A)) < 0 for all eigenvalues λi.

Proof: In the case where A has n different eigenvalues, the system can be transformed into the diagonal canonical form:

x̃̇ = Λ x̃  where  Λ := diag[λ1, λ2, . . . , λn]   (5.7)
and  x̃̇i = λi x̃i   (5.8)

Choose the following Liapunov function:

v(x̃) = − Σ_{i=1}^n 2 ℜ(λi) x̃i x̃i*   (5.9)

where x̃i* denotes the conjugate complex of x̃i; then

v(x̃) = − Σ_{i=1}^n (λi + λi*) x̃i x̃i*   (5.10)
v̇(x̃) = − Σ_{i=1}^n (λi + λi*) (x̃̇i x̃i* + x̃i x̃̇i*)   (5.11)

which with the system equation becomes

v̇(x̃) = − Σ_{i=1}^n (λi + λi*) (λi x̃i x̃i* + x̃i λi* x̃i*)   (5.12)
      = − Σ_{i=1}^n (λi + λi*)² x̃i x̃i*   (5.13)

since λi + λi* = 2 ℜ(λi) and the product

x̃i x̃i* = (ℜ(x̃i) + i ℑ(x̃i)) (ℜ(x̃i) − i ℑ(x̃i))   (5.14)
        = ℜ(x̃i)² − i² ℑ(x̃i)²   (5.15)
        = ℜ(x̃i)² + ℑ(x̃i)²   (5.16)

Thus

v̇(x̃) = − Σ_{i=1}^n (2 ℜ(λi))² [ℜ(x̃i)² + ℑ(x̃i)²]   (5.17)

Thus for any x̃i within the neighbourhood Ω of the equilibrium state x̃e = 0, according to Theorem 1:
4a) v̇(x) ≤ 0 ⇒ origin is stable
4b) v̇(x) < 0 ; x ≠ 0 ⇒ origin is asymptotically stable if in addition v(x̃) > 0, which is the case if ℜ(λi) < 0 ∀ i. The system is even globally asymptotically stable, because

v(x) → ∞  as  ||x|| → ∞   (5.18)

since for ℜ(λi) < 0 ∀ i

v(x̃) = − Σ_{i=1}^n 2 ℜ(λi) x̃i x̃i*   (5.19)
      = Σ_{i=1}^n (−2 ℜ(λi)) (ℜ(x̃i)² + ℑ(x̃i)²)   (5.20)

Thus for ||x̃|| → ∞ also v(x̃) → ∞. According to Theorem 2, the origin x̃e is globally asymptotically stable.
On the other hand, if only one eigenvalue assumes a positive real part, v(x̃) is indefinite and, since v̇(x̃) remains negative semidefinite, the system will be unstable.

Theorem 4. If the system ẋ = A x is asymptotically stable, it is always also globally asymptotically stable.

Proof: Given the asymptotically stable system ẋ = A x, disturbances in the initial conditions ||x(0)|| ≤ k are permissible.
Transforming the state variable x = α y with α > 1 gives ẏ = A y. This new system is identical to the original system. Therefore, disturbances in the initial conditions of the magnitude ||y(0)|| ≤ k are permissible. However,

||x(0)|| = α ||y(0)||   (5.21)
||y(0)|| = (1/α) ||x(0)|| ≤ k   (5.22)
thus  ||x(0)|| ≤ α k   (5.23)

Choosing α sufficiently large, the whole domain is included and any initial condition satisfies the condition ⇒ the system is globally asymptotically stable.

5.4.2 Bounded Input, Bounded Output Stability (BIBO Stability)

Definition - BIBO-stability: A SISO system is called BIBO-stable if for any bounded input signal the output signal remains bounded too.

Given the linear time-invariant SISO system

ẋ = A x + b u ;  x(0) = x0   (5.24)
y = c^T x   (5.25)

the solution in the time domain is given by

y(t) = c^T e^{A t} x(0) + ∫_0^t c^T e^{A(t−τ)} b u(τ) dτ   (5.26)

The first term depends on the initial conditions only and was subject to the stability discussion in the previous section.
Since the transfer function is given by

g(s) := c^T (s I − A)^{−1} b = c^T |s I − A|^{−1} adj(s I − A) b   (5.27)

the poles are identical with the eigenvalues of A. Therefore a system is BIBO-stable if it is asymptotically stable.
Note: the opposite is not necessarily true. BIBO stability does not imply asymptotic stability, because BIBO stability accounts only for the observable and controllable parts of the system.
Note: the Routh-Hurwitz method, which is based on a continued fraction expansion, can be used for testing the location of the poles, which are the roots of the denominator polynomial in the transfer function.

5.4.3 Time-Invariant Linear Systems

Consider the following linear, time-invariant system:

ẋ = A x   (5.28)

The weighted sum-of-squares is a candidate for a Liapunov function:

v(x) = x^T P x   (5.29)

where P is positive definite, i.e.

x^T P x > 0  for x ≠ 0,  and  x^T P x = 0  for x = 0   (5.30)

The time derivative is then

v̇(x) = ẋ^T P x + x^T P ẋ   (5.31)
     = (A x)^T P x + x^T P (A x)   (5.32)
     = x^T (A^T P + P A) x   (5.33)

According to Liapunov's theorem, the system ẋ = A x is asymptotically stable if v̇(x) is negative definite. Therefore, the matrix (A^T P + P A) must be negative definite or, which is the more common way of representing the result,

Q := −(A^T P + P A) > 0   (5.34)

Based on this derivation, a three-step algorithm for testing a linear, time-invariant system can be derived:
1. Choose an arbitrary symmetric matrix Q > 0, for example Q := I.
2. Calculate the elements of the likewise symmetric matrix P element by element from the equation Q := −(A^T P + P A).
3. Test for positive definiteness of P → Sylvester's theorem.

Theorem 5 (Sylvester). A symmetric matrix is positive definite if all its leading principal minors are > 0.

The proof is treated in standard linear algebra books such as Gantmacher, Matrix Theory Vol 1.
Thus, given a matrix A, all of the following determinants must be > 0:

D1 := |a11| ,

       | a11  a12 |
D2 :=  | a21  a22 | ,

              | a11  a12  ...  a1n |
              | a21  a22  ...  a2n |
. . . , Dn := | ...  ...  ...  ... |
              | an1  an2  ...  ann |
                                             (5.35)
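The three-step algorithm above can be sketched with scipy (my own sketch, not from the notes; `solve_continuous_lyapunov(a, q)` solves a X + X aᴴ = q, so it is called with Aᵀ to match Eq. (5.34)):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1 and -2: stable
Q = np.eye(2)                               # step 1: choose Q > 0, e.g. Q := I

# step 2: solve A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)

# step 3: Sylvester's criterion - all leading principal minors of P must be > 0
minors = [np.linalg.det(P[:k, :k]) for k in (1, 2)]
assert all(m > 0 for m in minors)           # P > 0  =>  asymptotically stable
print(P)                                    # here P = [[1.25, 0.25], [0.25, 0.25]]
```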

5.4.4 The Routh Criterion

The criterion derived by Routh and Hurwitz independently around 1880 is a recursive scheme for determining the number of roots with positive and negative real parts of a polynomial. Since all the roots of the denominator polynomial of the process transfer function must be negative for asymptotic stability, the Routh-Hurwitz

scheme can be utilised for testing stability without explicitly calculating the roots of the denominator polynomial.
The procedure goes as follows:
1. Write the denominator polynomial in s in the form an s^n + an−1 s^{n−1} + · · · + a1 s + a0 = 0, where ai ∈ R for all i and an > 0.
2. If for any of the coefficients ai in the polynomial the condition ai ≤ 0 holds, then at least one root is inside the right half plane; thus the system is not stable.
3. Theorem: A polynomial has all its roots in the left half plane iff all αi > 0 in the recursive scheme shown below for the example of a polynomial of 7th order:

α1 := an / an−1     |  an     an−2   an−4   an−6
α2 := an−1 / bn−2   |  an−1   an−3   an−5   an−7
α3 := bn−2 / cn−3   |  bn−2   bn−4   bn−6
 ..                 |  cn−3   cn−5   cn−7
  .                 |   ..
                    |    .
α7 := g1 / h0       |  g1     0
                    |  h0

where

bn−2 := (an−1 an−2 − an an−3) / an−1   (5.36)
bn−4 := (an−1 an−4 − an an−5) / an−1   (5.37)
..
 .   (5.38)
cn−3 := (bn−2 an−3 − an−1 bn−4) / bn−2   (5.39)

Theorem 6 (Routh). The number of roots of P(λ) in the RHP is equal to the number of sign changes in the second column of Routh's scheme (an, an−1, bn−2, cn−3, . . . ).

Note: Only the change of sign in this column is important; the calculation of the quotients αi in the first column can therefore be omitted.
Note: The Hurwitz criterion is closely related to Routh's criterion, because the elements in the so-called Hurwitz matrix are calculated by the same rules as the elements in the Routh scheme.
Note: If one of the coefficients (an, an−1, . . . ) in Routh's scheme becomes zero, this element is replaced by an arbitrarily small number ε. All subsequent coefficients are then a function of ε. After the whole scheme has been calculated, the limits of all coefficients in the second column that are a function of ε are examined for a change of sign by letting ε → 0+ and ε → 0−. Note though, that such a zero also indicates that the system has an eigenvalue on the imaginary axis, that is on the stability boundary.
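The recursive scheme and Theorem 6 can be sketched in a few lines of Python (my own sketch; it assumes none of the leading entries becomes exactly zero, i.e. the ε-case discussed above is not handled):

```python
import numpy as np

def routh_first_column(coeffs):
    """Leading entries (a_n, a_{n-1}, b_{n-2}, c_{n-3}, ...) of the Routh
    scheme for a_n s^n + ... + a_0; assumes no leading entry becomes zero."""
    m = (len(coeffs) + 1) // 2
    r_prev = np.zeros(m); r_prev[:len(coeffs[0::2])] = coeffs[0::2]
    r_cur = np.zeros(m);  r_cur[:len(coeffs[1::2])] = coeffs[1::2]
    col = [r_prev[0], r_cur[0]]
    for _ in range(len(coeffs) - 2):
        r_new = np.zeros(m)
        for k in range(m - 1):  # e.g. Eq. (5.36): b = (a' a'' - a a''') / a'
            r_new[k] = (r_cur[0] * r_prev[k + 1] - r_prev[0] * r_cur[k + 1]) / r_cur[0]
        r_prev, r_cur = r_cur, r_new
        col.append(r_cur[0])
    return col

def rhp_roots(coeffs):
    """Theorem 6: sign changes in the column = number of roots in the RHP."""
    s = np.sign(routh_first_column(coeffs))
    return int(np.sum(s[:-1] != s[1:]))

assert rhp_roots([1, 6, 11, 6]) == 0    # roots -1, -2, -3: stable
assert rhp_roots([1, -4, 1, 6]) == 2    # roots 2, 3, -1: two RHP roots
```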

5.4.5 Nyquist Criterion (SISO)

The Nyquist criterion is based on the complex analysis result known as Cauchy's principle of the argument. The main feature of the Nyquist criterion is that, when applying Cauchy's argument to the open-loop plant, it provides information about the stability of the closed-loop plant. The criterion applies to SISO systems. In addition, the result can also be interpreted in the Bode plot, yielding the gain margin and the phase margin, which give a measure of how far one is from the stability limit.

5.4.5.1 Cauchy's Principle of the Argument

Let f(s) be an analytic meromorphic² function in a region R, that is f(s) := B(s)/A(s), with z being the number of zeros and p the number of poles enclosed by a contour C encircling all poles and zeros in the complex plane; then

z − p = (1/(2 π i)) ∮_C f′(σ)/f(σ) dσ   (5.40)

This assumes that the contour is oriented counter-clockwise and simple, that is, without self-intersections.³
More generally, suppose ω is a curve, oriented counter-clockwise; then

(1/(2 π i)) ∮_ω f′(σ)/f(σ) dσ = Σ_z n(ω, z) − Σ_p n(ω, p)   (5.41)

where the function n(ω, k) is the winding number of ω around the point k.
The consequence is that the winding number n about the origin for a closed contour ω centred on the origin is

n := z − p   (5.42)

5.4.5.2 The Stability Criterion

The stability criterion is derived by using Cauchy's principle of the argument and drawing up a contour for the complete right-half plane (Figure 5.4). In doing so, the overall transfer function is split into two parts, namely the stable one and the unstable one. This is possible because the polynomial transfer functions can readily be factorised accordingly. A little problem arises from the fact that points at the stability limit are quite common, as these are the poles at zero, the property that goes with pure integrators, a common element in process models. This problem can though be handled quite easily: since the function must be analytic at every point of the contour, poles on the imaginary axis are avoided by infinitely small semi-circles.
² Meromorphic is a concept defined in the framework of complex analysis. In simple words, a meromorphic function is the ratio of two well-behaved functions. This ratio is well-behaved itself except at special points, where the denominator approaches zero and the ratio has poles. Thus polynomial transfer functions are exquisite examples of such functions.
3 http://en.wikipedia.org/wiki/Argument_prin iple

[Figure: Nyquist contour in the s-plane: the imaginary axis ℑ(s) closed by a semicircle of radius R → ∞ in the right half of the ℜ(s) plane, with small semicircular indentations of radius r → 0 around poles on the imaginary axis]

Figure 5.4: Contour integral in the s-plane

Applying Cauchy's principle of the argument gives the desired result: it states that the number of unstable poles of the closed-loop system is equal to the number of unstable poles of the open-loop system plus the number of encirclements of the origin of the Nyquist plot of the complex function 1 + P C, with P, C being the transfer functions of the plant and the controller, respectively. Most commonly this is modified by not plotting the function 1 + P C but P C, thus the open-loop transfer function, but now with encirclements of the point −1 instead of the origin. The zeros of 1 + P C are the poles of the closed-loop system, whilst the poles are the poles of the open-loop system and the zeros of the closed-loop system. With P := BP / AP and C := BC / AC:

S := P C / (1 + P C) := [(BP BC)/(AP AC)] / [1 + (BP BC)/(AP AC)] := BP BC / (AP AC + BP BC)   (5.43)

5.4.5.3 The Simplified Criterion

Very often physical plants are stable and thus have no poles in the right half plane. As a consequence, the above statement simplifies to: the −1 point must not be encircled.

5.4.5.4 Gain and Phase Margin

The −1 point is obviously important, as the denominator of the closed-loop SISO system is 1 + P(s) C(s). The zeros of the denominator are the roots of the equation 1 + P(s) C(s) = 0, which implies that P(s) C(s) = −1. Thus when s assumes the value of a root, the phase angle is −180°.
The distance of the cross-over of the open-loop transfer function P(s) C(s) to the −1 point gives a measure of how far the controlled system is away from the stability limit. Obviously, in the case of stable plants this cross-over occurs between the −1 point and the origin. The distance on the real axis is called the gain margin. Equally, one can measure the angle, being the argument of the open-loop transfer function, at the point where the magnitude of the transfer function is 1. This is called the phase margin.

[Figure: Nyquist plot in the ℜ(s)/ℑ(s) plane with the point −1 marked. Gain margin: distance of the cross-over point to the point −1 on the real axis. Phase margin: angle between the location where the locus crosses the unit circle and the negative real axis.]

Figure 5.5: Gain and phase margins in the Nyquist plot

Usually the two measures are shown in the Bode plots. For example, given the plant

P := 1 / ((0.5 s + 1) (0.8 s + 1) (0.1 s + 1))   (5.44)
C ∈ {5, 10, 50}   (5.45)

the Bode plots show the three open-loop transfer functions, one for each P-controller, with the stability margins (Figure 5.6).

[Figure: Bode plots (magnitude (dB) and phase (deg) versus frequency (rad/sec)) of the open loop for proportional gains 5, 10 and 50]

Figure 5.6: Gain and phase margins in the Bode plots
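For the plant (5.44) the gain margin can even be computed by hand: at the phase-crossover frequency the imaginary part of the denominator polynomial vanishes. A short numpy check (my own, exploiting the specific coefficients of this plant):

```python
import numpy as np

# Plant (5.44): P(s) = 1 / ((0.5 s + 1)(0.8 s + 1)(0.1 s + 1)); open loop L = C P
den = np.polymul(np.polymul([0.5, 1.0], [0.8, 1.0]), [0.1, 1.0])
# den = [0.04, 0.53, 1.4, 1.0], i.e. 0.04 s^3 + 0.53 s^2 + 1.4 s + 1

# phase crossover: Im{den(jw)} = 1.4 w - 0.04 w^3 = 0  =>  w180 = sqrt(1.4/0.04)
w180 = np.sqrt(den[2] / den[0])

gm = {}
for C in (5, 10, 50):
    L180 = C / np.polyval(den, 1j * w180)   # open-loop frequency response at -180 deg
    gm[C] = 1.0 / abs(L180)                 # gain margin: factor to reach the -1 point
    print(f"C = {C:2d}: gain margin = {gm[C]:.2f} -> "
          + ("stable" if gm[C] > 1 else "unstable"))
```

The ultimate gain is about 17.6, so the loops with C = 5 and C = 10 are stable while C = 50 is unstable, in line with the margins shown in Figure 5.6.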


Chapter 6

System Identification

6.1 Matching the Model to the Plant

There is really only one purpose for system identification, and that is to find an appropriate model for the modelled plant, limited to the range of operating conditions in which the identification experiments can be performed.
So why do these models have to be matched, and what about these operating conditions? Matching is necessary because the model is not a precise image of the plant, and not all the information about the plant's behaviour is known in all details, though some of those details may be necessary to include in the model in order to meet the specification one has defined for the use of the model. Thus system identification is done to find a model describing the process on the level of detail required for the application of the model. The application can be anything from just trying to understand the behaviour of the system to using it for design and operational tasks such as control.
What about the operating conditions? Plants, or any system for that matter, must be disturbed, excited, as the specialist calls it, in order for the process to reveal its behaviour. For example, in order to find out how heavy something is, one has to accelerate it or expose it to a gravitational field. The same holds for any other process: it must be moved about in order to test out its behaviour. For the purpose of identification one thus injects a well-controlled disturbance, an excitation signal, that moves the process, that is, changes its state. The model is then fed with the same excitation signal, and the behaviour of the plant and the simulated process is compared, on the basis of which the model is changed. The model is changed until its behaviour fits satisfactorily within the plant's range of operation, whereby satisfactorily is determined by introducing a measure for the difference between the plant and the model.
Process identification has been a subject of research for as long as models have been defined. The recent literature body includes the review paper of Åström and Eykhoff, the book by Eykhoff and the book on the subject by Ljung (Ljung (1987); Eykhoff (1974); Astroem and Eykhoff (1971)). The subject has also

108 CHAPTER 6. SYSTEM IDENTIFICATION

been of interest in the statistics community, in particular associated with parameter identification and signal processing.

6.2 Defining System Identification

We shall define system identification as follows:

Given a set of models S := {Mi | ∀i}, where each of the models may belong to a specific class of models, the system identification task is to find the best model in the defined set, given records of input-output data D from the plant obtained under operating conditions C, where best is measured by the criterion J.
In the case where the model set S consists of structurally different models, one talks about system identification. In the case where the set consists of one parameterised model with the varying parameters being the set generator, one talks about parameter identification¹.
Fitting best implies that system and parameter identification is an optimisation problem. The measure must be suitable to be used in an optimisation, and convexity is a desired property. The sum of squares of the deviation, where deviation needs to be defined, is the most commonly used criterion, though other norms are also suitable for the purpose, the 2-norm being mathematically easy to handle.
The fact that the necessary experiments can usually only be done in a limited range of operating conditions is often not sufficiently appreciated, because the identified model is, strictly speaking, only valid for the range in which the model has been validated, which usually coincides with the range in which the model has been identified. There is no guarantee on the extrapolation ability of the model, even if the model is a mechanistic model. For dynamic models the operating conditions may be best characterised by the frequency range and the amplitude range in which the identification experiments were performed. They define some kind of spectral conditions, which for example in robust control become very handy to have available.
In any case, the accuracy of the identified model should ultimately be judged in the framework of the application of the model. Thus, for example, if the model is used for controller design, the performance of the controlled process should be taken as the ultimate measure. This underlines the statement that the model is being constructed for a particular purpose, a fact that should be kept in mind at all times.

¹ The two things may overlap in that a parameter appearing as a factor in an expression may eliminate the associated term from the model as this parameter assumes the value zero. The zero thus takes a somewhat special position when interpreting model structures. This fact is extensively used in network representations such as neural nets.
6.2. DEFINING SYSTEM IDENTIFICATION 109

[Figure: block diagram. The excitation u enters the plant, producing the plant's response y; the same u drives the model(Θ), producing the model's response ŷ; u, y and ŷ enter the identification block, which returns the parameters Θ to the model.]

Figure 6.1: The grand scheme of parameter identification: The plant is excited with a sufficiently rich signal to stimulate the interesting modes of the plant. The same input is used to simulate the model's behaviour. All three generated signals, namely the excitation signal u, the plant's response y, and the model's response ŷ, are used to compute an estimate of the model's parameters, which then are used to update the model.

6.2.1 Consequences

Having defined the task of system identification, it is also apparent that the identified models are a function of all the elements entering the procedure: the data, the set of models and the criterion.
The criterion provides the measure; thus the result is obviously dependent on what measuring stick is being used. The most common choice is the cumulative sum of squares, mostly because of its nice mathematical properties. Mostly it serves the purpose well, and most people would not even spend a thought on the issue; it is thus mostly an unconscious choice of convenience.
The model set has a rather obvious effect on the result, as the parameters are, strictly speaking, defined in the context of the model. Not having a model in the set straightforwardly means that it is not being considered: obvious indeed, but easily overlooked.
The input, namely the excitation signal being used for the identification period, has a huge impact on the result. This is probably the most often ignored. One often uses test signals without being aware of what one is actually doing, whilst it is not difficult to get a quite detailed insight when looking at the frequency behaviour of the plant model. Figure 6.2 shows the frequency behaviour of two models for the same process. The one with the steeper asymptote and higher phase shift is the more complex one. Assuming that the more complex model indeed describes the plant better, one observes that the simpler model does very well up to about 1 Hz. Above, the phase changes to the double quite quickly. If one thinks about identification, then one observes two major parameters, each

[Figure: Bode diagram (magnitude (abs) and phase (deg) versus frequency (Hz)) of the two models; legend: flow rates, cycle 1: 10, cycle 2: 2, inflow: 1]

Figure 6.2: Bode plot of two models, a complicated and a simplified one.

represented by a corner or a bend in the amplitude plot: one around 3·10⁻² Hz, and the other around 2 Hz for the complex case and about 1 Hz for the simplified model. If one uses an input signal whose frequency content is limited at the high end to around 0.01 Hz, none of the corners can be extracted, as the output signal will be essentially the same as the input signal. Thus only the steady-state value can be obtained from this experiment. If one increases the frequency content to 0.1 or 1 Hz, one will see the effect of the first corner: the amplitude drops and the output is shifted by about 90 degrees. However, the output signal will have no information about the second corner and beyond. In order to find this second corner, one has to experiment in the domain of 1 to 10 Hz at the minimum. The frequency content is thus essential for the procedure, and it is recommended that one spends some time on designing the experiment so as to tickle the process at the right point, so to speak.

6.3 Models

With models being the main objective, they are put into the centre, whilst the methodologies associated with identification are put into second place, as they are extensively treated in the literature; for example Ljung (1987); Eykhoff (1974); Astroem and Eykhoff (1971).
Models are typically classified using attributes such as linear, nonlinear, stochastic, parameterised, discrete and continuous.

But what does linearity mean, for example? Most commonly the term linearity is used in connection with the state, more precisely: linear-in-the-state systems, as one is primarily interested in the evolution of the state, that is, essentially simulation. In identification, one is mostly interested in linear-in-the-parameters, as it is the parameters one is solving for. Thus if one is interested in the parameters, the nonlinearities in the state are usually quite manageable, whilst if one is interested in using the model for control, nonlinearities represent a major obstacle.
The literature often uses the term parameterised and the opposite un-parameterised for model classification. This attribute is not seen as very descriptive, and we rather use data-driven instead of un-parameterised.
Discrete and continuous models: on the macroscopic scale nature is well approximated by continuous systems, that is, the state is a continuous function of time and spatial co-ordinates.

6.3.1 Data-driven Models

Data models come as input-output data or series. As such, typical system responses belong in this class, the impulse response and the step response being the two main ones.
The impulse response is the chemical engineer's residence time distribution. Numerically convoluting the impulse response with an input series yields the response of the system.
Assuming discretely changing inputs, one can apply the step response for each time step to obtain the response of the system. The step response as a model is extensively used in model predictive control applications. The background is linear systems.
Using fast Fourier transform techniques, one can use tabled information about the transfer function to obtain input/output data.

6.3.2 Special Forms

6.3.2.1 Hammerstein Model

A nonlinear input transformation followed by a linear dynamic system.

6.3.2.2 Wiener Model

A linear dynamic system followed by a nonlinear output transformation.
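The difference between the two structures is only the order of the blocks, which a small sketch (my own, with a hypothetical first-order filter and a square nonlinearity) makes visible:

```python
import numpy as np

def linear(u):
    """Hypothetical first-order filter x[k+1] = 0.9 x[k] + 0.1 u[k]."""
    x, y = 0.0, []
    for uk in u:
        x = 0.9 * x + 0.1 * uk
        y.append(x)
    return np.array(y)

square = lambda v: v**2             # hypothetical static nonlinearity

u = np.sin(np.linspace(0.0, 10.0, 200))
y_hammerstein = linear(square(u))   # nonlinearity BEFORE the linear dynamics
y_wiener = square(linear(u))        # nonlinearity AFTER the linear dynamics

# same blocks, different order, different input-output behaviour
assert not np.allclose(y_hammerstein, y_wiener)
```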



6.3.2.3 Static L-i-P (Linear-in-Parameters) Models

The most common form discussed in identification: linear regression.

A rather generic formulation of the L-i-P model is multi-input, single-output:

y := f^T(u) Θ ,   (6.1)

where the vector of functions f may be nonlinear in the input vector u but linear in the parameter vector Θ.
The nonlinearity of the function f of u is virtually arbitrary. The most common structures being used are polynomials and exponentials. For example:

f^T(u) := [u^r]_{r := 1, 2, ..., 1/2, 1/3, ...} ,   (6.2)

or

f^T(u) := [u_r u_j]_{r,j} .   (6.3)
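Because (6.1) is linear in Θ, fitting it is an ordinary least-squares problem; a small numpy sketch (my own, with a hypothetical model f(u) = [u, u²] and noise-free data):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0.5, 2.0, 50)              # single input, 50 experiments
theta_true = np.array([2.0, -0.5])         # hypothetical "plant" parameters

# L-i-P model (6.1) with f(u) := [u, u^2]: nonlinear in u, linear in Theta
F = np.column_stack([u, u**2])
y = F @ theta_true                         # noise-free plant data

theta_hat, *_ = np.linalg.lstsq(F, y, rcond=None)
assert np.allclose(theta_hat, theta_true)  # exact recovery without noise
```

With measurement noise added to y, the same call returns the least-squares estimate whose properties are the subject of the next section.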

6.4 Point Estimators


Estimators are rules on how to compute the parameters of a model, including the parameters of the nominal model, the non-stochastic part, and the associated models of the stochastic variables, namely the distribution functions. For the mathematical definition of an estimator we define a random variable Y which is defined on a probability space (see Section 7.8). An estimator, denoted by f (Y ), is defined as a function of the random variable. Applying the estimator to a particular set of experimental outcomes, being the data y, the corresponding estimate is obtained, being one of the parameters Θ of the nominal model or the associated distribution functions. 2
Property - Unbiased : An estimator is called unbiased if

θ = θ̂ := E [f (Y )] (6.4)

Property - Uniformly minimal mean square error : An estimator fi (Y ) for a parameter θ is said to be uniformly minimal mean square error if

E [ (fi (Y ) − θ) (fi (Y ) − θ)T ] ≤ E [ (fj (Y ) − θ) (fj (Y ) − θ)T ] (6.5)

for all estimators in the set {f }.


Definition - Minimum variance unbiased estimator (MVUE) : Estimator that is unbiased and has the property uniformly minimal mean square error.
Definition - Best linear unbiased estimator (BLUE) : MVU estimator that is a linear function of the data.
It is often not feasible to find a MVUE or a BLUE estimator, but it usually suffices to use an estimator that approaches the lower variance bound defined by the Cramer-Rao inequality:
2 The following follows closely the book of Goodwin and Payne (1977)

Theorem 7 (Cramer-Rao inequality). Let {Pθ } be a family of distributions on a sample space Ω with the density pY |θ ; then, subject to some regularity conditions, the covariance V(f ) of any unbiased estimator f (Y ) of θ satisfies the inequality

V(f ) ≥ Mθ−1 (6.6)

with V(f ) = E [ (f (Y ) − θ) (f (Y ) − θ)T ] and where the matrix Mθ , called the Fisher information matrix, is defined by

Mθ := E [ (∂ log p(Y |θ)/∂θ)T (∂ log p(Y |θ)/∂θ) ] (6.7)

Proof. f (Y ) is an unbiased estimator of θ, thus:

E [f (Y )] = θ (6.8)

i.e.

∫Ω f (y) p(y|θ) dy = θ (6.9)

Differentiating with respect to θ:

∂/∂θ ∫Ω f (y) p(y|θ) dy = I (6.10)

Assuming regularity under the integral:

∫Ω f (y) ∂p(y|θ)/∂θ dy = I (6.12)

∫Ω f (y) ∂ log p(y|θ)/∂θ p(y|θ) dy = I (6.13)

E [ f (Y ) ∂ log p(Y |θ)/∂θ ] = I (6.14)

Also we have:

E [ ∂ log p(Y |θ)/∂θ ] = ∫Ω ∂ log p(y|θ)/∂θ p(y|θ) dy = ∫Ω ∂p(y|θ)/∂θ dy (6.15)
= ∂/∂θ ∫Ω p(y|θ) dy = ∂/∂θ (1) = 0T (6.16)

With Equation (6.8) and Equation (6.16), the covariance of ∂ log p(Y |θ)/∂θ and f (Y ) is

E [ [ f (Y ) − θ ; (∂ log p(Y |θ)/∂θ)T ] [ (f (Y ) − θ)T , ∂ log p(Y |θ)/∂θ ] ] = [ V(f ) , I ; I , Mθ ] (6.17)

which is clearly non-negative since it is a covariance matrix. Thus

[ I , −Mθ−1 ] [ V(f ) , I ; I , Mθ ] [ I ; −Mθ−1 ] ≥ 0 (6.18)

yielding

V(f ) − Mθ−1 ≥ 0 (6.19)

Property - Efficiency : The unbiased estimator is efficient if its covariance is equal to the Cramer-Rao bound, i.e. the inverse of the Fisher information matrix.
Theorem 8. Subject to regularity conditions, there exists an efficient unbiased estimator for θ if and only if we can express ∂ log p(Y |θ)/∂θ in the form

(∂ log p(Y |θ)/∂θ)T = A(θ) [f (y) − θ] (6.20)

where A(θ) is a matrix not depending upon y.

Proof. Sufficiency: Assume the theorem holds; then Equation (6.17) becomes:

E [ [ f (Y ) − θ ; A(θ) [f (y) − θ] ] [ (f (Y ) − θ)T , (A(θ) [f (y) − θ])T ] ]
= [ V(f ) , V(f ) AT (θ) ; A(θ) V(f ) , Mθ ] (6.21)

which from Equation (6.17) is:

= [ V(f ) , I ; I , Mθ ] (6.22)

which gives

A(θ) V(f ) = I (6.23)

and

A(θ) V(f ) AT (θ) = Mθ (6.24)

hence

V(f ) = Mθ−1 (6.25)

Necessity: Assume Equation (6.25); then from Equation (6.17)

E [ [ f (Y ) − θ ; (∂ log p(Y |θ)/∂θ)T ] [ (f (Y ) − θ)T , ∂ log p(Y |θ)/∂θ ] ] = [ Mθ−1 , I ; I , Mθ ] (6.26)

Premultiplying with [ Mθ , −I ] and postmultiplying with [ Mθ , −I ]T gives:

E [ ( Mθ [f (Y ) − θ] − (∂ log p(Y |θ)/∂θ)T ) ( Mθ [f (Y ) − θ] − (∂ log p(Y |θ)/∂θ)T )T ] = 0 (6.27)

Consequently

Mθ [f (Y ) − θ] = (∂ log p(Y |θ)/∂θ)T (6.28)

which proves the theorem.
Corollary (8.1). The proof also reveals that if the theorem applies then A(θ) = Mθ , the Fisher information matrix.

6.4.1 Least-Squares Estimator and L-i-P Models


6.4.1.1 Getting the Best Parameters

Let the instance of the multiple-input, single-output, l-i-p model Equation (6.1) be:

ŷi := f T (ui ) Θ ∈ R1 , (6.29)

with Θ ∈ Rk . We assume having n instances of input-output experimental data available.
To condense the equations, we stack the n input-output instances up:

ŷ := [ŷi ]∀i , (6.30)

F := [f T (ui )]∀i ∈ Rn×k , (6.31)

in order to get:

ŷ := F Θ ∈ Rn . (6.32)

Let in addition the observation corresponding to the input ui be yi , which we also stack up:

y := [yi ]∀i . (6.33)

In order to define the cost function, we first define the error as the difference between the response of the plant and the response of the model to the excitation signal applied identically to both:

e(Θ) := y − ŷ(Θ) , (6.34)
:= y − F Θ , (6.35)

and the cost function being the Q-weighted sum of squares:

J(Θ) := eT (Θ) Q e(Θ) , (6.36)

with Q being a positive semi-definite weighting matrix.

The regression problem is then an optimisation problem: the optimal parameter is defined as the one that minimises the cost function and thus leads to a minimal sum of squared errors. Let the optimal solution be marked with a ˆ. Then:

∂J(Θ)/∂Θ |Θ̂ := 0 (6.37)

0 := 2 [ (∂e(Θ)/∂Θ)T Q e(Θ) ]Θ̂ (6.38)

0 := [ (∂ŷ(Θ)/∂Θ)T Q e(Θ) ]Θ̂ (6.39)

0 := FT Q (y − F Θ̂) (6.40)

0 := FT Q y − FT Q F Θ̂ . (6.41)

Equation (6.40) is also called the normal equation, stating that the error is orthogonal to the function of the input; thus no more information can be extracted from the input.
Re-arranging to solve for the parameter vector gives:

Θ̂ := (FT Q F)−1 FT Q y . (6.42)
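The estimator of Equation (6.42) can be sketched directly. The two-parameter model and all data below are made up for the illustration; the unit weighting Q = I is one possible choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up experiment: n instances of a two-parameter l-i-p model
# y = theta0 + theta1 * u, observed with additive noise
n = 50
u = rng.uniform(0.0, 10.0, n)
F = np.column_stack([np.ones(n), u])           # F := [f^T(u_i)] stacked over i
theta_true = np.array([1.0, 3.0])
y = F @ theta_true + rng.normal(0.0, 0.1, n)   # noisy observations

Q = np.eye(n)  # unit weighting for illustration

# Equation (6.42): theta_hat := (F^T Q F)^{-1} F^T Q y,
# solved as a linear system rather than by forming the inverse explicitly
theta_hat = np.linalg.solve(F.T @ Q @ F, F.T @ Q @ y)
print(theta_hat)  # close to [1.0, 3.0]
```

Solving the normal equations with `solve` instead of inverting F^T Q F is the numerically preferred route.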

6.4.1.2 Effect of Measurement Noise

Measurement noise is one of the most common problems with measured data. Making a couple of assumptions, it is straightforward to estimate the effect of the measurement noise on the estimated parameters.
Let the additive measurement error be v ; then the key assumptions are:

1. inputs are uncorrelated,

2. E(v) := 0 :: mean error is zero,

3. var(v) := I σ 2 with σ 2 being the variance of the error distribution.

Simplifying the writing of the unit-weighted estimator:

Θ̂ := (FT F)−1 FT y , (6.43)
Θ̂ := S y . (6.44)
The estimated variance of the parameter vector is

var(Θ̂) := var(S y) (6.45)
:= S var(y) ST (6.46)
:= S I σ 2 ST (6.47)
:= S ST σ 2 (6.48)
:= (FT F)−1 FT F (FT F)−1 σ 2 (6.49)
:= (FT F)−1 σ 2 . (6.50)

The result is a symmetric matrix called the variance-covariance matrix, the diagonal being the variances and the off-diagonal the respective covariances. The covariance implies that a change in the expectation (average) of one parameter will also change the correlated parameter in the direction and magnitude indicated by the respective covariance. As a normed measure one uses the correlation.

6.4.1.2.1 Correlation

The correlation matrix is the variance-covariance matrix normed by the variances:

R := [ cov(Θi Θj ) / (var(Θi ) var(Θj ))1/2 ]∀i,∀j , (6.51)
:= [ cov(Θi Θj ) / (σi σj ) ]∀i,∀j , (6.52)
:= [ri,j ]∀i,∀j . (6.53)

The correlation varies between -1 and 1, being completely negatively or completely positively correlated. In most applications correlation is an undesired property and large correlation can be an indication of poor experimental design.
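Equations (6.50) to (6.53) translate into a few lines of code. The regression setting is the same made-up unit-weighted example as before; the noise level is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up unit-weighted regression: intercept and slope on a scalar input
n, sigma = 100, 0.5
u = rng.uniform(0.0, 10.0, n)
F = np.column_stack([np.ones(n), u])

# Equation (6.50): variance-covariance matrix of the parameters
cov_theta = np.linalg.inv(F.T @ F) * sigma**2

# Equations (6.51)-(6.53): correlation matrix normed by the variances
s = np.sqrt(np.diag(cov_theta))
R = cov_theta / np.outer(s, s)

print(R)  # unit diagonal; the off-diagonal element is r_{0,1}
```

For a positive input range the intercept and the slope are negatively correlated: raising the slope estimate must be compensated by lowering the intercept, which the off-diagonal element reflects.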

6.4.1.3 Expected Accuracy

One can use the estimated parameters to predict the behaviour of the plant for a particular instance. Let the instance be

yi := f T (ui ) Θ + v , (6.54)

where v is a measurement error that is normally distributed and has zero mean. Under these conditions, the variance is:

var(ŷi ) := var(f T (ui ) Θ) + var(v) (6.55)
:= f T (ui ) var(Θ) f (ui ) + var(v) (6.56)
:= ( f T (ui ) (FT F)−1 f (ui ) + 1 ) σ 2 . (6.57)
If we repeat the experiment m times, we can improve the estimate of the variance:

var(ŷi,m ) := ( f T (ui ) (FT F)−1 f (ui ) + 1/m ) σ 2 . (6.58)

Given the variance, the confidence limits are:

yi := f T (ui ) Θ ± 2 σ √( f T (ui ) (FT F)−1 f (ui ) + 1 ) . (6.59)

If one estimates the variance, the 2 is replaced by the respective value from the Student t-distribution with the appropriate degrees of freedom and the chosen confidence limit.

6.4.1.4 Confidence Limits for Parameters

Having found the best parameters poses the question of how confident one can be in them. So how does the cost function change with the parameters?
Let the cost function be the identity-weighted version as given in Equation (6.36); then its change with the parameters is:

J(Θ) := eT (Θ) e(Θ) , (6.60)
:= (y − F Θ)T (y − F Θ) ,
:= (y − F Θ̂ − F (Θ − Θ̂))T (y − F Θ̂ − F (Θ − Θ̂))
:= (y − F Θ̂)T (y − F Θ̂)
− (y − F Θ̂)T F (Θ − Θ̂) − (Θ − Θ̂)T FT (y − F Θ̂)
+ (F (Θ − Θ̂))T F (Θ − Θ̂) ,
:= J(Θ̂) + (Θ − Θ̂)T FT F (Θ − Θ̂) , (6.61)

where we used Equation (6.40) twice for the middle terms.
Thus

J(Θ) − J(Θ̂) := (Θ − Θ̂)T FT F (Θ − Θ̂) . (6.62)

This is an ellipsoid in the parameter space. The lengths of the axes are given by the eigenvalues of the matrix C := FT F whilst the eigenvectors, which, due to the spectral theorem, are orthogonal, determine the directions. 3
The α confidence limits of the parameters are given by the corresponding value of the F-distribution:

J(Θ) − J(Θ̂) ≤ k s2 Fk,n−k (α) . (6.63)

3 Since C is symmetric, C = V Λ V−1 = CT = (V Λ V−1 )T . Thus VT = V−1 and the quadratic form xT FT F x can be rewritten as xT V Λ VT x := zT Λ z with z := VT x.
The variance can be estimated from the cost function:

s2 := eT (Θ) e(Θ) / (n − k) , (6.64)

where n is the number of observations and k the number of estimated parameters.

6.4.1.5 How Good is the Identified Model: Variance Analysis

Because of the experimental errors one will not get the same response from the plant when repeating an experiment using the same input. If the responses are within the limits of the expected output error, one has no reason to be suspicious about the model appropriately describing the process. If one tries to fit the same data with a more complex model, one will find no improvement. Naturally, if one performs more experiments, it may show that the model is indeed not the best one can find. The latter aspect is used to design experiments focusing on the weak parts of the model.
The means to check on the model is to analyse the variance for the various contributions. Again we start with the sum of squares of the error, the cost function Equation (6.36), which we expand:

eT e := (y − F Θ)T (y − F Θ) (6.65)
:= yT y − ΘT FT y − yT F Θ + ΘT FT F Θ (6.66)
:= yT y − ΘT FT F Θ − ΘT FT F Θ + ΘT FT F Θ (6.67)
:= yT y − ΘT FT F Θ (6.68)

Isolate the total sum of squares over the outputs:

yT y := ΘT FT F Θ + eT e . (6.70)

The total sum of squares is thus the sum of the regression sum of squares plus the rest sum of squares. Each of these terms is connected to the degrees of freedom used to compute the respective term. The sum of squares of the outputs uses n :: number of observations. The regression sum of squares is computed from k :: number of parameters normal equations. Thus the difference n − k are the degrees of freedom left for the rest sum of squares.
It is customary to show this in a table:

                 SSQ          DOF
total SSQ        yT y         n
regression SSQ   ΘT FT F Θ    k
rest SSQ         eT e         n − k

One can show that if eT e/(n − k) estimates the variance of the experimental error, then the model is describing the process appropriately. If we take the rest SSQ divided by the respective degrees of freedom as an estimate for the variance, thus

s2e := eT e / (n − k) , (6.71)

and knowing the actual variance of the experimental error to be σ 2 , then the (n − k)-scaled ratio of the estimated variance and the actual variance is χ2 distributed, thus algebraically:

(n − k) s2 / σ 2 ∼ χ2n−k . (6.72)

One has good reason to declare the model as not fitting well and thus reconsider its structure if:

s2 / σ 2 > χ2 (α) / (n − k) , (6.73)

α being the confidence limit.

6.4.1.5.1 Not knowing the variance

The variance of the experimental error is usually not known and must be estimated from the data. Assuming that we make ni experiments for the input ui and obtain a corresponding set of responses yi , and repeat the experiments varying i := 1, . . . , q , then the estimate for the variance is computed by:

s2e := Σqi=1 (y − E [y])2 / Σqi=1 (ni − 1) (6.74)
:= Σqi=1 (y − E [y])2 / (Σqi=1 ni − q) . (6.75)

The −1, thus the reduction of the degrees of freedom by one, is due to the mean being E [y], which is calculated from the same data. So for s2e the total degrees of freedom is:

ne := Σqi=1 ni − q (6.76)

If the model fits well, then the experimental error is also estimated by the rest sum of squares. The two variance estimates can be compared with each other, as one can show that their ratio is F-distributed with the respective two degrees of freedom.
If the ratio gets too large the model does not fit well and one may consider the model to be a bad fit:

(1/(n − k)) eT e / s2e > F (n − k, ne ) . (6.77)

The above test assumes that the variance is estimated with one set of experiments and the parameters with another. It is, though, meaningful to use all experiments for the regression and split the variance accordingly:
the regression and split the varian e a ordingly:
source        SSQ                           DOF          average SSQ
regression    ΘT FT F Θ                     k            ΘT FT F Θ / k
lack of fit   eT e − Σqi=1 (y − E [y])2     n − k − ne   (eT e − Σqi=1 (y − E [y])2 ) / (n − k − ne )
pure error    Σqi=1 (y − E [y])2            ne           Σqi=1 (y − E [y])2 / ne
total SSQ     yT y                          n

Defining the variances:

s2ef := (eT e − Σqi=1 (y − E [y])2 ) / (n − k − ne ) , (6.78)
s2e := Σqi=1 (y − E [y])2 / ne , (6.79)

the lack-of-fit test is then:

s2ef / s2e ≤ Fn−k−ne ,ne (6.80)

to accept the model.
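The split into pure-error and lack-of-fit sums of squares can be sketched for replicated experiments. The data below are fabricated (a truly linear plant with replicates at q input levels), so the lack-of-fit variance should be of the same order as the pure-error variance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up replicated experiments: q input levels, ni repeats each
q, ni, sigma = 5, 8, 0.3
u_levels = np.linspace(1.0, 5.0, q)
u = np.repeat(u_levels, ni)
y = 1.0 + 2.0 * u + rng.normal(0.0, sigma, q * ni)  # plant is truly linear

# Fit the linear model y = theta0 + theta1 * u
F = np.column_stack([np.ones_like(u), u])
theta = np.linalg.lstsq(F, y, rcond=None)[0]
e = y - F @ theta
n, k = len(y), 2
ne = q * ni - q  # pure-error degrees of freedom, Equation (6.76)

# Pure-error SSQ: spread of the replicates about their level means
level_means = y.reshape(q, ni).mean(axis=1)
ss_pe = np.sum((y.reshape(q, ni) - level_means[:, None]) ** 2)

# Lack-of-fit SSQ is the remainder of the rest SSQ
ss_lof = e @ e - ss_pe

s2_ef = ss_lof / (n - k - ne)  # Equation (6.78)
s2_e = ss_pe / ne              # Equation (6.79)
print(s2_ef / s2_e)            # compare against F_{n-k-ne, ne}
```

Because the level-means fit is the best possible per-level description, the lack-of-fit sum of squares is always non-negative.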

6.4.1.5.2 How to proceed

Identification is an iterative process. One fits a model, checks if it fits well and, if not, modifies the model until one is satisfied. The lack-of-fit measure is thus used as the decision criterion for whether or not a modified model should be adopted: the lack-of-fit of the new model compared to the old model must be statistically significantly better. An appropriate F-test provides the information.

6.4.1.6 Bias

Under certain circumstances the estimator will not deliver the desired result, but an estimate that is contaminated with a bias. With E [Θ̂] being the expectation of the estimated parameters and Θ the true parameter values, a biased estimator is defined as:

E [Θ̂] := Θ + b . (6.82)

If b is not equal to zero, the estimator is called biased; otherwise the estimator is unbiased.

6.4.1.6.1 Bias due to omitted variables

This is unfortunately a very common case, as one often does not know which variables affect the output of the plant. The linear model that one identifies thus may not include all those variables, and the effect is a bias in the estimate.
To show the effect, let the plant be represented by:

z := f T (u) Θ + gT (u) Θ . (6.83)

The model to be fitted shall be identical to the first term of the plant; thus the second term is the omitted one, which we abbreviate as v :

y := f T (u) Θ . (6.84)

Consequently one can write the plant output as:

z := y + v . (6.85)

Using the model as a basis and the analogous stacking of the individual experiment instances, the estimator Equation (6.42) is

Θ̂ := (FT F)−1 FT z (6.86)
:= (FT F)−1 FT (y + v) (6.87)
:= Θ + (FT F)−1 FT v . (6.88)

Clearly the second term is now the bias of the estimate.
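The bias term of Equation (6.88) can be demonstrated numerically. The plant, the omitted quadratic term and all coefficients below are made up for the illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Plant z = f^T(u) Θ + omitted term; we fit only the first part
n = 200
u = rng.uniform(0.0, 1.0, n)
F = np.column_stack([np.ones(n), u])  # modelled part f^T(u) = [1, u]
v = 0.5 * u**2                        # omitted term, correlated with u
z = F @ np.array([1.0, 2.0]) + v      # noise-free plant output

# Least-squares estimate, Equations (6.86)-(6.88)
theta_hat = np.linalg.lstsq(F, z, rcond=None)[0]

# The bias (F^T F)^{-1} F^T v pulls the estimate away from [1, 2]
bias = np.linalg.solve(F.T @ F, F.T @ v)
print(theta_hat, bias)
```

With no measurement noise present, the estimate deviates from the true [1, 2] by exactly the bias term, which does not vanish as long as the omitted term is correlated with the regressors.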

6.4.1.6.2 Bias due to correlation in output noise

Asymptotically the bias is given by Astroem and Eykhoff (1971):

E [Θ̂ − Θ] := (E [FT F])−1 E [FT e] , (6.89)

where e is the output error.

6.4.1.6.3 Bias due to input noise

Bias is also introduced into the parameter estimation if the input has a stochastic component. The mathematical treatment of this case is rather involved and closely linked to the derivation of the Kalman filter.

6.4.1.7 Instrumental Variables

The least squares estimator can be obtained from the model

y := F(u) Θ + e (6.90)

by multiplying both sides of the error-free model with FT :

FT y := FT F Θ . (6.92)

The estimate will be unbiased if the term FT e has zero mean, which is not the case when the error is correlated. The instrumental variable method replaces the FT matrix by an instrumental variable matrix WT in the above manipulation. It is a matrix which is a function of the data with the properties

E [WT F] :: not singular (6.93)
E [WT e] := 0 . (6.94)

The corresponding estimator is

WT y := WT F Θ (6.95)
Θ̂ := (WT F)−1 WT y , (6.96)

which is unbiased.
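The effect of the two properties (6.93) and (6.94) can be illustrated in a deliberately simplified static setting, not the dynamic filtered-input case of the next section: the regressor is observed with noise, so the least-squares product F^T e does not have zero mean, while an independent second observation of the same quantity serves as instrument. All data are fabricated:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5000
x = rng.uniform(0.0, 10.0, n)  # true, unobserved regressor

# Noisy regressor used in the model, and an independent noisy copy as instrument
F = np.column_stack([np.ones(n), x + rng.normal(0.0, 1.0, n)])
W = np.column_stack([np.ones(n), x + rng.normal(0.0, 1.0, n)])
y = 1.0 + 2.0 * x  # plant output

theta_ls = np.linalg.lstsq(F, y, rcond=None)[0]
theta_iv = np.linalg.solve(W.T @ F, W.T @ y)  # Equation (6.96)

print(theta_ls, theta_iv)  # LS slope is attenuated, IV slope is not
```

Because the instrument's noise is independent of the equation error, W^T e has zero mean and the IV estimate is consistent, whereas the LS slope is biased towards zero.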

6.4.1.7.1 Choice of instruments

For dynamic systems (Section 6.5), most commonly a filtered input is used as instrument, where the filter's discrete transfer function may be

g(q) := D(q) / C(q) . (6.97)

An attractive alternative is the modulating function filters introduced by Maletinsky (1978); Preisig (1984); Preisig and Rippin (1993a,b).

6.4.2 Maximum Likelihood Estimator


The maximum likelihood estimator selects the most likely parameter (Box and Tiao, 1973; Ljung, 1987). The approach is based on Bayes' theorem Equation (7.95). Given the vector of observations y := {yi }, the joint density function is p(y, Θ), which depends on a vector of parameters Θ. This density may be interpreted in two ways:

p(y, Θ) := p(y|Θ) p(Θ) , (6.98)
:= p(Θ|y) p(y) . (6.99)

The conditional distribution of Θ is:

p(Θ|y) := p(y|Θ) p(Θ) / p(y) . (6.100)

The denominator can be rewritten as:

p(y) := ∫ p(y|Θ) p(Θ) dΘ , Θ continuous
p(y) := Σ p(y|Θ) p(Θ) , Θ discrete (6.101)

p(Θ) is denoted as the prior probability, p(Θ|y) as the posterior probability and p(y|Θ) as the likelihood.
In contrast to the least squares method, the maximum likelihood method assumes the parameter to be distributed and not the measurement depending on the parameter. This assumption is exactly inverted for the least squares method (Johnston and DiNardo, 1997; Koch, 2007; Box and Tiao, 1973).

6.5 Selected Dynamic Systems


In this section two commonly used dynamic models are introduced, which are then extended to a generic transfer-function model that captures a large family of models. In terms of overall structure, one distinguishes between three structures computing the error in three different ways: 1) equation error, 2) output error, 3) input error.

6.5.1 Auto-Regressive-eXtra-input (ARX) Model


The ARX model is an equation error model and is given by the following discrete polynomial representation (Ljung (1987)) using the shift operator q (Section 4.1.1.3):

A(q) y(k) := B(q) u(k) + e(k) , (6.102)

or

y(k) := B(q)/A(q) u(k) + 1/A(q) e(k) (6.103)

with:

A(q) := 1 + ΣA ai q −i , (6.104)
B(q) := ΣB bi q −i , (6.105)
A := {i := 1, . . . , n} , (6.106)
B := {i := 1, . . . , m} , (6.107)

and e denoting the error signal. 4

The ARX acronym derives from the statistics literature labelling the different terms with:

AR   A(q) y(k)   Auto-Regressive
X    B(q) u(k)   eXtra input 5

This model can be cast as linear in the parameters and results in a standard linear regression problem. Let: 6

Θ := [[ai ]A ; [bi ]B ] (6.108)
z(k) := [[−q −i ]A y(k); [q −i ]B u(k)] , (6.109)

then the model can be written in the form:

ŷ(k|Θ) := zT (k) Θ . (6.110)


4 The notation used here is compacted in that only the index sets are shown, implying the operation to run over the index set for the implied index. For example, the summation denoted by ΣA ai stands short for Σni=1 ai given the definition of A above.
5 in economics this term is also referred to as exogenous input
6 The notation used here is compacted following the same idea: [q −i ]A stands for [q −i ]∀i or [1, q −1 , q −2 , . . . , q −n ]
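The linear-regression form (6.108)-(6.110) can be exercised on simulated data. The first-order ARX process below, its coefficients, the binary excitation and the noise level are all made up for this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical first-order ARX process:
# y(k) = -a1*y(k-1) + b1*u(k-1) + e(k), i.e. A(q) = 1 + a1 q^-1, B(q) = b1 q^-1
a1, b1, N = -0.8, 0.5, 500
u = rng.choice([-1.0, 1.0], N)   # binary excitation signal
e = rng.normal(0.0, 0.05, N)     # white equation error
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + b1 * u[k - 1] + e[k]

# Regressor z(k) = [-y(k-1); u(k-1)], Equation (6.109), stacked for k = 1..N-1
Z = np.column_stack([-y[:-1], u[:-1]])
theta = np.linalg.lstsq(Z, y[1:], rcond=None)[0]  # estimates [a1, b1]
print(theta)  # close to [-0.8, 0.5]
```

With white equation error, this least-squares fit of the ARX regressor is unbiased, which is exactly why the ARX structure is so convenient.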

6.5.2 Auto-Regressive-Moving-Average-eXtra-input (ARMAX) Model

The ARX model has a very simple structure with respect to the error. The ARMAX model extends this by also defining dynamics for the error. For this purpose an additional polynomial is introduced (Ljung (1987)):

C(q) := ΣC ci q −i , (6.111)
C := {i := 1, . . . , o} , (6.112)

which is used to define the ARMAX model:

y(k) := B(q)/A(q) u(k) + C(q)/A(q) e(k) . (6.113)

The parameter vector is correspondingly expanded too:

Θ := [[ai ]A ; [bi ]B ; [ci ]C ] . (6.114)

In the ARX case we could cast the parameter estimation problem into a simple linear regression form. In order to find a similar form, we first have to construct an estimate of the output for the ARMAX process. For this derivation we compact the notation:

y(k) := G(q) u(k) + H(q) e(k) , (6.115)

with

G(q) := B(q)/A(q) (6.116)
H(q) := C(q)/A(q) (6.117)
:= 1 + Σ∞i:=1 hi q −i . (6.118)

The variance of the error is thus scaled such that the H(q) polynomial is monic, i.e. the leading coefficient is 1. We further define:

v(k) := H(q) e(k) . (6.119)

Thus:

v(k) := e(k) + ( Σ∞i:=1 hi q −i ) e(k) , (6.120)
:= e(k) + (H(q) − 1) e(k) . (6.121)

The expectation of v(k) given the data at k − 1 is then:

v̂(k|k − 1) := E [v(k)|k − 1] , (6.122)
:= E [e(k)] + E [(H(q) − 1) e(k)] . (6.123)
With the data being known at time k − 1, the second term is actually known and the expectation of the error is zero, thus

v̂(k|k − 1) := (H(q) − 1) e(k) , (6.124)
:= (H(q) − 1) H −1 (q) v(k) , (6.125)
:= (1 − H −1 (q)) v(k) . (6.126)

Now we can assemble the expression for the expected output:

ŷ(k|Θ) := G(q) u(k) + v̂(k|k − 1) , (6.127)
:= G(q) u(k) + (1 − H −1 (q)) v(k) , (6.128)
:= G(q) u(k) + (1 − H −1 (q)) (y(k) − G(q) u(k)) , (6.129)
:= G(q) u(k) + (1 − H −1 (q)) y(k) − (1 − H −1 (q)) G(q) u(k) , (6.130)
:= (1 − H −1 (q)) y(k) + H −1 (q) G(q) u(k) . (6.131)

Substituting the two polynomials

ŷ(k|Θ) := (1 − A(q)/C(q)) y(k) + (A(q)/C(q)) (B(q)/A(q)) u(k) (6.132)
:= (1 − A(q)/C(q)) y(k) + (B(q)/C(q)) u(k) , (6.133)

we get the one-step predictor for the ARMAX model.
Some more manipulations: First multiply with the C(q) polynomial:

C(q) ŷ(k|Θ) := (C(q) − A(q)) y(k) + B(q) u(k) . (6.134)

Extend on both sides:

C(q) ŷ(k|Θ) + (1 − C(q)) ŷ(k|Θ) :=
(C(q) − A(q)) y(k) + B(q) u(k) + (1 − C(q)) ŷ(k|Θ) . (6.135)

Simplify the left-hand side first and expand the right-hand side, aiming at an expression in the prediction error

ǫ(k, Θ) := (y(k) − ŷ(k|Θ)) : (6.136)

ŷ(k|Θ) := (C(q) − A(q)) y(k) + B(q) u(k) +
(1 − C(q)) (ŷ(k|Θ) − y(k) + y(k)) (6.137)
:= B(q) u(k) + (1 − A(q)) y(k) + (C(q) − 1) (y(k) − ŷ(k|Θ)) , (6.138)
:= B(q) u(k) + (1 − A(q)) y(k) + (C(q) − 1) ǫ(k, Θ) . (6.139)

The estimated output can thus be written in the form:

ŷ(k|Θ) := zT (k, Θ) Θ , (6.140)

with:

Θ := [[ai ]A ; [bi ]B ; [ci ]C ] (6.141)
z(k, Θ) := [[−q −i ]A y(k); [q −i ]B u(k); [q −i ]C ǫ(k, Θ)] , (6.142)
ŷ(k|Θ) := zT (k, Θ) Θ , (6.143)

which is a nonlinear relation, though looking very much like the linear regression model we had for the ARX model. This form is called pseudo-linear regression.
6.5.3 General Transfer Function Model Structures


Along this line a generic transfer model can be suggested (Ljung (1987)):

A(q) y(k) := B(q)/F (q) u(k) + C(q)/D(q) e(k) . (6.144)

The one-step predictor for this generic model, analogous to Equation (6.133), is:

ŷ(k|Θ) := (1 − D(q) A(q)/C(q)) y(k) + (D(q)/C(q)) (B(q)/F (q)) u(k) . (6.145)

The following table, also taken from Ljung (Ljung (1987)), shows the models and their names depending on what polynomials are used in the general model:

polynomial   model     name
B            FIR       finite impulse response
A, B         ARX       auto-regressive with extra input
A, B, C      ARMAX     auto-regressive moving average with extra input
A, C         ARMA      auto-regressive moving average
A, B, D      ARARX     2 (auto-regressive) with extra input
A, B, C, D   ARARMAX   2 (auto-regressive) moving average with extra input
B, F         OE        output error
B, F, C, D   BJ        Box-Jenkins

This model can also be cast into the pseudo-linear regression form. Again defining the error:

ǫ(k, Θ) := (y(k) − ŷ(k|Θ)) (6.146)

one finds:

ǫ(k, Θ) := (D(q)/C(q)) (A(q) y(k) − (B(q)/F (q)) u(k)) . (6.147)

Introducing the variables:

w(k, Θ) := (B(q)/F (q)) u(k) (6.148)
v(k, Θ) := A(q) y(k) − w(k, Θ) , (6.149)

this simplifies to:

ǫ(k, Θ) := (D(q)/C(q)) v(k, Θ) . (6.150)
With:

Θ := [[ai ]A ; [bi ]B ; [ci ]C ; [di ]D ; [fi ]F ] (6.151)
z(k, Θ) := [[−q −i ]A y(k); [q −i ]B u(k); [q −i ]C ǫ(k, Θ);
[−q −i ]D v(k); [−q −i ]F w(k)] , (6.152)

one has the model again in the pseudo-linear regression form of Equation (6.143).

6.6 Kalman Filter in Identification


The Kalman filter has been given its name because the technique got most attention after being published by Rudolf E. Kalman, but the basic idea had been worked on by several people earlier. This includes mainly Bucy, who is often also included in the name of the filter, but rarely people like Ruslan L. Stratonovich and others 7. For the sake of brevity it shall be called Kalman filter in the continuation.
The filter and its derivation are interesting as it solves an old problem formulated by Wiener, namely the issue of having stochastic components active at the input of a dynamic system. For linear systems the Kalman filter solves this problem for stochastic components exciting the input and the output independently, with both distributions being at least symmetrical.
The model being used for a discrete system is:

x(k + 1) := Φ x(k) + Γ u(k) + w(k) , (6.153)
y(k) := C x(k) + v(k) , (6.154)

where the stochastic components are here assumed to be Gaussian:

w ∼ N (0, Q) , (6.155)
v ∼ N (0, R) . (6.156)

The derivation of the filter can be done in many different ways, including the orthogonality principle, Bayes' theorem, sequential minimal sum of squares, gradient search methods for the sum of squares, and others. We shall not derive the filter, but refer the interested reader to the literature, for example Jazwinski (1970), which is still one of the books with the most thorough treatment of this subject.
The Kalman filter works in two steps:
Prediction: of the state and the estimate's covariance

state        x̂(k|k − 1) := Φ x̂(k − 1|k − 1) + Γ u(k − 1) , (6.157)
covariance   P(k|k − 1) := Φ P(k − 1|k − 1) ΦT + Q(k − 1) . (6.158)
7 see for example Wikipedia http://en.wikipedia.org/wiki/Kalman_filter for more information on this subject

Update: of measurement residuals, covariance of measurement residuals; computation of the Kalman gain being used to update the state estimate and the covariance of the state estimate:

residual              e(k|k − 1) := y(k) − C x̂(k|k − 1) , (6.159)
residual covariance   S(k) := C P(k|k − 1) CT + R(k) , (6.160)
Kalman gain           K(k) := P(k|k − 1) CT S−1 (k) , (6.161)
state estimate        x̂(k|k) := x̂(k|k − 1) + K(k) e(k|k − 1) , (6.162)
estimate covariance   P(k|k) := (I − K(k) C) P(k|k − 1) . (6.163)
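The two-step recursion of Equations (6.157)-(6.163) can be sketched for a scalar system. The system matrices, noise levels and the constant input are arbitrary choices for this illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Made-up scalar system x(k+1) = Phi x(k) + Gamma u(k) + w(k), y(k) = C x(k) + v(k)
Phi, Gamma, C = np.array([[0.9]]), np.array([[0.5]]), np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.1]])

x = np.zeros((1, 1))      # true state
x_hat = np.zeros((1, 1))  # state estimate
P = np.eye(1)             # estimate covariance

for k in range(200):
    u = np.array([[1.0]])
    # simulate the process and its measurement
    x = Phi @ x + Gamma @ u + rng.normal(0.0, np.sqrt(Q[0, 0]), (1, 1))
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]), (1, 1))

    # prediction, Equations (6.157)-(6.158)
    x_hat = Phi @ x_hat + Gamma @ u
    P = Phi @ P @ Phi.T + Q

    # update, Equations (6.159)-(6.163)
    e = y - C @ x_hat
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ e
    P = (np.eye(1) - K @ C) @ P

print(x_hat[0, 0], x[0, 0])  # the estimate tracks the true state
```

The covariance P settles to a steady-state value well below the measurement noise variance, reflecting the gain in accuracy obtained by combining the model prediction with the measurement.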


pro ess w v

u x
Γ 1 C
q

y
Φ

e
K

−1


Γ 1 C ŷ
q

Φ lter

Figure 6.3: The Kalman lter in a blo k diagram

The Kalman filter is a state variable filter, meaning that the output of the filter provides an estimate of the state given a vector of observations. The main use of this filter is to reconstruct the state from a set of measurements. It is thus also called an observer. There are other observers known in the literature, in particular the Luenberger observer, which differs from the Kalman filter mainly by having a fixed gain. The gain of the Luenberger observer is designed by setting the dynamics of the error propagation, that is, by placing the poles of the set of linear differential equations that evolve from the derivation of the residual error.
6.6.1 Extended Kalman Filter


The extended Kalman filter uses a linearised version of the nonlinear model for the computation of the variance propagation, whilst the prediction step is done with the nonlinear model. This idea was extended for use in parameter identification in that the state of the system is extended by the parameters to be estimated, setting the dynamics of these new states to zero. The technique suffers from a number of problems, which are mostly associated with convergence and with providing a good, or at least workable, estimate of the variance-covariance matrices for the actual state, the parameters and the measurements. The literature is correspondingly rich on modifications to this scheme, including constraining the parameters, introduction of forgetting factors, fixed matrices for the Kalman gain etc.

6.7 The Excitation Signal


The behaviour of the plant can only be observed if it is moving, meaning it has to be disturbed, or excited, by applying an excitation to the plant. The excitation must be chosen such that the plant is flexed sufficiently so as to make all movements visible. For example, if one wants to know the mass of a physical object, one must move the object, for example lift it. If one only pulls on it without moving it, one can only state that the mass is bigger than the force one applied during the unsuccessful experiment.

Figure 6.4: The choice of frequencies is essential for the identification experiment (Bode amplitude plot, log(|g|) versus log(ω))

This concept applies directly to the plant identification problem: if one wants to obtain information about the plant in a certain time scale, then one needs to excite it in that time scale, which directly translates into applying a certain frequency as an excitation input. This can be nicely demonstrated in the case shown in Figure 6.4. It shows a Bode amplitude plot. The black line represents the behaviour of the true plant. A first-order model is being fit, which has two parameters, the gain and the time constant. One thus needs at least two independent experiments that provide the information necessary to find estimates for the two parameters. In the red case, the two sinusoids indicated by the two dotted red lines are being used, and in the green case it is the corresponding green dotted lines that indicate the input signals. The result is obviously different. In the red case the low-frequency behaviour is captured whilst in the green case more of the high-frequency behaviour is reflected into the model.
The literature is rich on discussions and suggestions of what type of input signals should or can be used. In many cases people aim at identifying a plant as a kind of whole, meaning that they do not think in time scales and thus hierarchical models. If one finds a split in the time scale, it is almost always feasible to work with two models, one for the high-dynamic range and one for the low-dynamic range. For example, it is quite thinkable that the red model is used to describe the plant in Figure 6.4 in the low-frequency range whilst in the high-frequency range the green model is used.
If one indeed aims at identifying the whole plant, one must provide a model that is able to capture the behaviour, thus is rich enough. Having such a model, one then must excite the plant persistently, meaning with a signal that is rich enough in frequency content. For more details on the definition of persistent excitation see for example Ljung (1987); Eykhoff (1974), which also include references to work on this subject.
Obviously one of the simple solutions for the latter problem is to use all frequencies, for example a random signal. Since this may not be trivial to apply, one often uses signals that come close, such as random binary signals. Adding a variation in the amplitude gives multi-level random signals.
From the practical point of view, one should also keep the signal-to-noise ratio in mind. If one applies for example a random binary signal, the energy, which is usually available only in a limited amount, is spread over all frequencies equally, at least ideally. In any case it is spread, making the signal at the individual frequency less strong and thus more likely to be covered by noise components acting on the equipment, for example the measurement devices. It is thus often better to apply one frequency or a selected set of frequencies. In any case, though, one should keep in mind in what time-scale the model will be used and thus to what detail the process should or must be described.
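A common practical construction of a pseudo-random binary signal (PRBS) is a linear feedback shift register. The sketch below is one possible variant, not taken from the text: with a 4-bit register and feedback from the last two cells the recurrence corresponds to a primitive polynomial, giving a maximum-length sequence of period 2^4 − 1 = 15.

```python
import numpy as np

def prbs(n_bits, length):
    """Pseudo-random binary signal from a simple shift register.

    Feedback is the XOR of the last two cells; starting from the
    all-ones state this yields a maximum-length sequence for n_bits = 4.
    """
    reg = [1] * n_bits
    out = []
    for _ in range(length):
        out.append(reg[-1])            # output the last cell
        fb = reg[-1] ^ reg[-2]         # feedback bit
        reg = [fb] + reg[:-1]          # shift register one step
    # map {0, 1} to the excitation levels {-1, +1}
    return np.array([2 * b - 1 for b in out])

signal = prbs(4, 15)
print(signal)  # one full period of the binary excitation
```

The signal switches between the two levels in a deterministic but noise-like pattern and repeats with its full period, which makes its spectrum flat over a wide frequency band, exactly the persistent-excitation property asked for above.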

6.7.1 Design of Experiments


Models are constructed based on the available knowledge. If the nature of the plant is known in detail, one may decide on constructing a mechanistic model such as was discussed in the earlier chapters (Chapter 2). If this is not the case, one has to resort to empirical models, which reflect the plant's behaviour in a functional form that the person modelling the process believes fits the behaviour of the plant best. It is customary to label the first type of models as white-box models, whilst the latter are called black-box models. What was discussed in the earlier chapters is typically a mix of the two. The foundation is usually mechanistic, but the more one gets into the internal details, what is often referred to as the constitutive equations, the less information one has about the basic nature, and black-box models selected on experience are being used: the conservation concepts of physics are considered as mechanistic, and so are large parts of the macroscopic theory-based description of the hydraulics of a plant. As one gets into the details of transport and into the description of material properties and reactions, the understanding of the underlying processes becomes thinner and thinner, or more and more involved, so that one usually has to resort to essentially empirical models. Often some remainders of the underlying concepts are preserved, reflecting into the functional form the empirical model takes.
Asking the following three questions leads stepwise to a process model:

• What affects the plant? Screening experiments aim at providing
rudimentary information about the input/output behaviour of the plant.

• How do inputs affect the plant? The response-surface method
usually uses simple models, often polynomial models, to describe the steady-state
input-output behaviour of the plant, searching then for the optimum
in the approximate space.

• Why does the plant do what it is doing? Mechanistic models
are the only models that explain the internal behaviour of the plant.

6.7.1.1 Single Block Design

Some of the characteristics are not available through deductive studies and must
be identified using process identification techniques. The experiments are to be
designed to provide as much information as possible. Design of experiments has its
roots in the statistical literature, which refers to the inputs or stimuli signals as
factors (Box et al., 1978). The most efficient way of arranging experiments is in
blocks, meaning a set of experiments which modifies the input levels systematically.
Potentially to each input a step is being applied. One waits long enough
to get sufficiently close to the steady-state value of the observation, which
implies that one has to wait for at least 5 times the maximal time constant in the
plant. The input levels of the inputs are changed such as to form an orthogonal
plan. If the model is nonlinear in the inputs this statement is slightly modified
in that it is the function f(u) that is the object of an orthogonal design.
Let F_o(u_o) be the value at the centre point u_o and define

    F(u) := F_o(u_o) + diag[∆f(∆u)] S^T ,    (6.164)

the entries ∆f(∆u) in the diagonal matrix being positive. A plan matrix S lists
all combinations of +1 and −1. For example, for a 3-input system one gets the
S matrix:

    S :=  [ +1  +1  +1
            −1  +1  +1
            +1  −1  +1
            −1  −1  +1
            +1  +1  −1
            −1  +1  −1
            +1  −1  −1
            −1  −1  −1 ] ,    (6.165)

where the rows are the experiments and the columns indicate the input variation.
The arrangement chosen here is called the standard form due to Yates
(Box et al., 1978)^8.
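As a mechanical check, the plan matrix (6.165) and its orthogonality can be generated in a few lines; this is a minimal sketch (pure standard library), with the rows ordered as in the standard form above:

```python
# Build the 2^k full-factorial plan matrix S for k = 3 inputs, one row per
# experiment, entries +1 / -1 (high / low level), matching Equation (6.165):
# the first column alternates fastest, the last column slowest.
k = 3
S = [[a, b, c] for c in (+1, -1) for b in (+1, -1) for a in (+1, -1)]

# The columns of S are mutually orthogonal, so S^T S = 2^k * identity.
n = len(S)                                   # 2^k = 8 experiments
StS = [[sum(S[r][i] * S[r][j] for r in range(n)) for j in range(k)]
       for i in range(k)]
assert all(StS[i][j] == (n if i == j else 0)
           for i in range(k) for j in range(k))
```

The orthogonality of the columns is exactly what makes the least-squares analysis below trivial.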
Let

    D := diag[∆f(∆u)] .    (6.166)

Thus more compactly we write:

    F := F_o + D S^T .    (6.167)

Substituting the function f in Equation (6.1) one gets

    y := (F_o + D S^T)^T Θ .    (6.168)

Performing experiments at the centre, averaging them and subtracting the average
from the measurements obtained when executing the plan, one gets

    y − ȳ_0 := (D S^T)^T Θ ,    (6.169)
            := S D Θ .    (6.170)

The least-squares estimator Equation (6.42) is then:

    D Θ := (S^T S)^{-1} S^T (y − ȳ_0) ,    (6.171)
      Θ := D^{-1} (S^T S)^{-1} S^T (y − ȳ_0) .    (6.172)

The matrix S^T S is a scaled identity, the columns of S being orthogonal, which
makes the regression analysis extremely simple.
^8 Box, Hunter and Hunter (Box et al., 1978) explain the details of the algorithm in chapter 10.
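The simplification brought by the orthogonal plan can be illustrated numerically. The sketch below uses hypothetical step sizes and parameters and assumes the noise-free deviation model y − ȳ_0 = S D Θ of Equations (6.169)–(6.170); with S^T S = 2^k I, the estimator (6.172) reduces to signed column averages:

```python
# Hypothetical numbers: step sizes d_j (entries of D, positive) and "true"
# parameters theta_j to be recovered from the plan.
d = [2.0, 0.5, 1.0]
theta_true = [1.5, -3.0, 0.25]

S = [[a, b, c] for c in (+1, -1) for b in (+1, -1) for a in (+1, -1)]
n, k = len(S), 3

# Synthetic noise-free observations: (y - ybar0)_i = sum_j S_ij d_j theta_j
dy = [sum(S[i][j] * d[j] * theta_true[j] for j in range(k)) for i in range(n)]

# Theta = D^{-1} (S^T S)^{-1} S^T (y - ybar0), and (S^T S)^{-1} = I / n here
theta_hat = [sum(S[i][j] * dy[i] for i in range(n)) / (n * d[j])
             for j in range(k)]
assert all(abs(th - tt) < 1e-12 for th, tt in zip(theta_hat, theta_true))
```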
134 CHAPTER 6. SYSTEM IDENTIFICATION

6.7.1.2 Handling Additive Noise

The effect of measurement noise can be reduced by replication of individual
experiments. This reduces the variations by the square root of the number of
identical experiments, assuming the noise is stationary. Assuming the variance
is σ² and one performs n experiments getting n responses {y_i}_{i:1...n}, then the mean
is
    ȳ := (1/n) Σ_{i:=1}^{n} y_i .    (6.173)

The variance is then

    var(ȳ) := (1/n²) Σ_{i:=1}^{n} var(y_i)    (6.174)
            := (1/n²) n σ²    (6.175)
            := σ²/n .    (6.176)

In praxis such repeated experiments are often difficult to perform, as a lot of
other disturbances affect the plant, not least the person or instrument
performing the experiments.
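The σ²/n reduction of Equation (6.176) can be verified empirically; a small Monte-Carlo sketch with invented numbers, assuming stationary Gaussian noise:

```python
import random
import statistics

# Empirical check of (6.173)-(6.176): averaging n replicates reduces the
# variance of the mean to sigma^2 / n.
random.seed(0)
sigma, n, trials = 2.0, 16, 4000

# Each trial: one averaged observation built from n noisy replicates.
means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(trials)]
var_mean = statistics.pvariance(means)

# Theory predicts sigma^2 / n = 0.25; the estimate lands close to it.
assert abs(var_mean - sigma**2 / n) < 0.05
```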

6.7.1.3 Reducing Trends

6.7.1.3.1 Randomising

Trends caused by correlation of the inputs can be reduced by randomising
the experiments. In this process one introduces a random variable selecting the
experiments. For example, randomising the experimental plan Equation (6.165),
one would randomly permute the rows, each of which represents an individual
experiment.

6.7.1.3.2 Block Designs

Trends can also be reduced in the case when one deals with similar items /
plants, which are to be analysed or modelled by exciting the corresponding inputs. For
example, if one has two different pairs of shoes and wants to know their ability to
absorb a dye and grease, the shoes are the items / plants and will be blocked.
The experiments then apply grease / dye respectively in combinations. An
experimental plan is then established for each block, which in turn is randomised
as discussed in Section 6.7.1.3.1.

6.7.2 Optimal Designs

Optimal experiment design builds on information theory, because for an unbiased
estimator the estimator variance is related to the Fisher information matrix:
minimising the variance corresponds to maximising the information.

There exist several criteria of optimality. The traditional ones build on the
invariants of the Fisher information matrix M, including:

A (average) optimality: min trace(M^{-1})


Chapter 7

Appendix: Mathematical Components

7.1 Linear Algebra

A column vector assuming real numbers:

    x := [ x_1
           x_2
           ...
           x_n ] := [x_i]_{i:=1,2,...,n} ∈ R^n    (7.1)

The transposed is a row vector:

    x^T := [ x_1  x_2  ...  x_n ]    (7.2)

A matrix:

    A := [ a_{1,1}  a_{1,2}  ...  a_{1,m}
           a_{2,1}  a_{2,2}  ...  a_{2,m}
             ...      ...    ...    ...
           a_{n,1}  a_{n,2}  ...  a_{n,m} ]    (7.3)

      := [a_{i,j}]_{i:=1,...,n; j:=1,...,m} ∈ R^{n×m}    (7.4)

The transposed matrix:

    A^T := [ a_{1,1}  a_{2,1}  ...  a_{n,1}
             a_{1,2}  a_{2,2}  ...  a_{n,2}
               ...      ...    ...    ...
             a_{1,m}  a_{2,m}  ...  a_{n,m} ]    (7.5)

        := [a_{j,i}]_{j:=1,...,m; i:=1,...,n} ∈ R^{m×n}    (7.6)

Inner product:

    x^T y := Σ_{∀i} x_i y_i    (7.7)

Outer product:

    x y^T := [x_i y_j]_{∀i,∀j}    (7.8)

Matrix-vector product:

    b := A x := [ Σ_{∀j} a_{i,j} x_j ]_{∀i}    (7.9)

Matrix product:

    C := A B := [ Σ_{∀j} a_{i,j} b_{j,k} ]_{∀i,∀k}    (7.10)

Inverse:

    A^{-1} := |A|^{-1} adj(A)    (7.11)

Example, a 2 × 2 matrix:

    A := [ a_{1,1}  a_{1,2}
           a_{2,1}  a_{2,2} ]    (7.12)

    |A| := a_{1,1} a_{2,2} − a_{1,2} a_{2,1}    (7.13)

    adj(A) := [  a_{2,2}  −a_{1,2}
                −a_{2,1}   a_{1,1} ]    (7.14)
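A quick numerical check of the adjugate formula (7.11)–(7.14) on a 2 × 2 example with arbitrary numbers:

```python
# A^{-1} = adj(A) / |A|, so A A^{-1} must return the identity matrix.
A = [[4.0, 7.0],
     [2.0, 6.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]            # (7.13)
adj = [[A[1][1], -A[0][1]],
       [-A[1][0], A[0][0]]]                            # (7.14)
Ainv = [[adj[i][j] / det for j in range(2)] for i in range(2)]

I2 = [[sum(A[i][m] * Ainv[m][j] for m in range(2)) for j in range(2)]
      for i in range(2)]
assert all(abs(I2[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```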

7.1.1 Matrix differentiation

Let y be a vector function of x: y(x), and Q be a symmetrical matrix; then for
the scalar quadratic form:

    d(y^T(x) Q y(x))/dx = (dy^T(x)/dx) Q y(x) + ( y^T(x) Q (dy(x)/dx^T) )^T    (7.16)
                        = (dy^T(x)/dx) Q y(x) + (dy^T(x)/dx) Q^T y(x)    (7.17)
                        = (dy^T(x)/dx) Q y(x) + (dy^T(x)/dx) Q y(x)    (7.18)
                        = 2 (dy^T(x)/dx) Q y(x) ,    (7.19)

the symmetry of Q being used in the third step.
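The result (7.19) is easy to verify by finite differences; a sketch for the special case y(x) := A x with a random A and a symmetrised random Q (all numbers arbitrary):

```python
import numpy as np

# For y(x) = A x and symmetric Q:  d(y^T Q y)/dx = 2 (dy^T/dx) Q y = 2 A^T Q A x.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
Q = rng.standard_normal((3, 3))
Q = 0.5 * (Q + Q.T)                      # symmetrise Q
x = rng.standard_normal(2)

f = lambda v: (A @ v) @ Q @ (A @ v)      # scalar quadratic form
analytic = 2.0 * A.T @ Q @ A @ x

h = 1e-6
numeric = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                    for e in np.eye(2)])
assert np.allclose(numeric, analytic, atol=1e-6)
```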

7.2 Analysis

7.2.1 Leibniz' Rule

    ∂/∂x ∫_{f(x)}^{g(x)} dx′ F(x, x′) := F(x, g(x)) ∂g(x)/∂x − F(x, f(x)) ∂f(x)/∂x
                                         + ∫_{f(x)}^{g(x)} dx′ ∂F(x, x′)/∂x .    (7.20)
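A numerical spot check of the rule, with the hypothetical choice F(x, x′) := x x′², f(x) := x, g(x) := x²:

```python
# Compare d/dx of the integral (finite differences of a quadrature) with the
# right-hand side of Leibniz' rule, evaluated at x = 1.3.
F = lambda x, xp: x * xp**2
f = lambda x: x
g = lambda x: x**2

def I(x, n=20000):                       # midpoint quadrature of the integral
    a, b = f(x), g(x)
    h = (b - a) / n
    return sum(F(x, a + (i + 0.5) * h) for i in range(n)) * h

x, eps = 1.3, 1e-5
lhs = (I(x + eps) - I(x - eps)) / (2 * eps)

dg = (g(x + eps) - g(x - eps)) / (2 * eps)
df = 1.0
a, b, n = f(x), g(x), 20000
h = (b - a) / n
# integral of dF/dx = x'^2 over [f(x), g(x)]
integral_term = sum((a + (i + 0.5) * h)**2 for i in range(n)) * h
rhs = F(x, g(x)) * dg - F(x, f(x)) * df + integral_term

assert abs(lhs - rhs) < 1e-4
```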

7.2.2 Taylor Expansion

The Taylor expansion approximates an arbitrary function as a polynomial in
the finite perturbations of the variables of the function around a
given point in the space spanned by the variables.

7.2.3 Euler's Theorem on Homogeneous Functions

Euler's Theorem of Homogeneous Functions 1. Let f(x_1, ..., x_k) be a
function such that

    f(λ x_1, ..., λ x_k) := λ^n f(x_1, ..., x_k) ;    (7.21)

then f is said to be a homogeneous function of degree n, for which

    n f(x_1, ..., x_k) := Σ_{i:=1}^{k} (∂f(x_1, ..., x_k)/∂x_i) x_i .    (7.22)

Proof. Differentiation of the homogeneity condition with respect to λ gives

    (d/dλ) f(λ x_1, ..., λ x_k) := (d/dλ) λ^n f(x_1, ..., x_k) ,    (7.23)
    Σ_{i:=1}^{k} (∂f(λ x_1, ..., λ x_k)/∂(λ x_i)) x_i := n λ^{n−1} f(x_1, ..., x_k) .    (7.24)

Setting λ = 1, one obtains:

    Σ_{i:=1}^{k} (∂f(x_1, ..., x_k)/∂x_i) x_i := n f(x_1, ..., x_k) .    (7.25)
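The theorem can be spot-checked numerically; a sketch for the arbitrarily chosen degree-3 homogeneous function f(x, y, z) := x²y + xyz:

```python
# Verify both the homogeneity condition (7.21) and Euler's identity (7.22)
# at an arbitrary point, the gradient taken by central differences.
f = lambda x, y, z: x**2 * y + x * y * z
pt = (1.2, -0.7, 2.0)
n = 3                                    # degree of homogeneity

h = 1e-6
grads = []
for i in range(3):
    p_plus = list(pt); p_plus[i] += h
    p_minus = list(pt); p_minus[i] -= h
    grads.append((f(*p_plus) - f(*p_minus)) / (2 * h))

lhs = sum(g * xi for g, xi in zip(grads, pt))    # sum_i x_i df/dx_i
assert abs(lhs - n * f(*pt)) < 1e-6              # Euler's identity

lam = 1.7
assert abs(f(*(lam * v for v in pt)) - lam**n * f(*pt)) < 1e-9  # homogeneity
```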

7.2.4 Legendre Transformation Generating New Extensive Properties

Let ϕ_1 be an arbitrary vectorial extensive quantity, which is a function of two
vectors of extensive quantities ϕ_a and ϕ_b:

    ϕ_1 := ϕ_1(ϕ_a, ϕ_b) .    (7.26)

A new extensive quantity ϕ_2 is defined:

    ϕ_2 := ϕ_2(ξ, ϕ_b) ,    (7.27)
        := ϕ_1 − ξ ϕ_a .    (7.28)

The last equation can be interpreted as a tangent plane to the original function,
with the slopes in the different directions, collected in the Jacobian

    ξ := ∂ϕ_1/∂ϕ_a^T ,    (7.29)

taking the role of the new variables.

Differentiating the two equations (Equation (7.26)) and (Equation (7.28)) one
finds:

    dϕ_1 := (∂ϕ_1/∂ϕ_a) dϕ_a + (∂ϕ_1/∂ϕ_b) dϕ_b ,    (7.30)
    dϕ_2 := dϕ_1 − ϕ_a^T dξ^T − ξ dϕ_a .    (7.31)

Elimination of dϕ_1 gives:

    dϕ_2 := (∂ϕ_1/∂ϕ_b) dϕ_b − ϕ_a^T dξ^T .    (7.32)

Applying the Legendre transformation to extensive quantities introduces
intensive properties, collected in the Jacobian. The transformation can also be
inverted, in which case the respective roles are exchanged.

7.2.5 Examples

The Legendre transformations are basic to thermodynamics. For example, let

    ϕ_1 := U(S, V, n) ,    (7.33)
    ϕ_2 := A(T, V, n) ,    (7.34)

A being the Helmholtz energy, and

    ϕ_a := S ,    (7.35)
    ϕ_b := [V, n^T]^T .    (7.36)

Thus

    ∂ϕ_1/∂ϕ_b := ∂U(S, V, n)/∂[V, n^T] ,    (7.37)
              := [−p, µ^T] ,   µ being the chemical potentials,    (7.38)
    ξ := ∂U(S, V, n)/∂S ,    (7.39)
      := T .    (7.40)

7.3 Vector Analysis

7.3.1 Scalar Fields and Vector Fields

Definition - Scalar field f : a mapping of points P := (x, y, z)
defined in a spatial domain D(f) onto the set of numbers f(P) =
f(x, y, z).

Examples: temperature, pressure as a function of the spatial coordinates.

Definition - Vector field v : a mapping of a spatial domain D(v)
onto a set of vectors: a vector v(P) is associated with every point
P := (x, y, z) ∈ D(v).

Examples: flow fields, force fields

7.3.2 Differential Operators

7.3.2.1 Gradient Operator

The operator gradient maps a scalar field into a vector field:

    scalar field f −grad→ vector field grad f

    grad f := ( ∂f/∂x , ∂f/∂y , ∂f/∂z ) .    (7.41)

7.3.2.2 Divergence Operator

The operator divergence maps a vector field into a scalar field:

    vector field v −div→ scalar field div v

    div v := ∂v_1/∂x + ∂v_2/∂y + ∂v_3/∂z .    (7.42)

7.3.2.3 Curl Operator

The operator curl maps a vector field into another vector field:

    vector field v −rot→ vector field rot v

    rot v := ( ∂v_3/∂y − ∂v_2/∂z , ∂v_1/∂z − ∂v_3/∂x , ∂v_2/∂x − ∂v_1/∂y ) .    (7.43)

7.3.2.4 Laplace Operator

The Laplace operator △ maps a scalar field into another scalar field:

    scalar field f −△→ scalar field △f

    △f := ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z² .    (7.44)

7.3.2.5 Nabla Operator

    ∇ := ( ∂/∂x , ∂/∂y , ∂/∂z ) .    (7.45)

7.3.2.6 Relations

    ∇ · v = div v ,    (7.46)
    ∇ × v = rot v ,    (7.47)
    ∇f = grad f ,    (7.48)
    ∇²f = △f ,    (7.49)
    △v = grad div v − rot rot v ,    (7.50)
    rot grad f = 0 ,    (7.51)
    grad div v = rot rot v + △v ,    (7.52)
    div rot v = 0 ,    (7.53)
    rot rot v = grad div v − △v .    (7.54)
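Two of the relations, rot grad f = 0 and div rot v = 0, can be spot-checked with nested central differences; the fields below are arbitrary smooth choices:

```python
import math

h = 1e-4

def partial(F, p, i, h=h):
    # central difference of F with respect to coordinate i at point p
    q, r = list(p), list(p)
    q[i] += h; r[i] -= h
    return (F(q) - F(r)) / (2 * h)

f = lambda p: math.sin(p[0] * p[1]) + p[2]**3                 # scalar field
v = lambda p: [p[1] * p[2], math.cos(p[0]), p[0]**2 * p[1]]   # vector field

grad_f = lambda p: [partial(f, p, i) for i in range(3)]

def rot(F, p):
    d = lambda comp, i: partial(lambda q: F(q)[comp], p, i)
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

def div(F, p):
    return sum(partial(lambda q: F(q)[i], p, i) for i in range(3))

p = [0.4, -0.8, 1.1]
rot_grad = rot(grad_f, p)
div_rot = div(lambda q: rot(v, q), p)
assert all(abs(c) < 1e-5 for c in rot_grad)   # rot grad f = 0
assert abs(div_rot) < 1e-5                    # div rot v = 0
```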

7.3.3 Flow

The flow through a surface S with normal vector n, located in
a flow field v, is:

    f̂ := ∫∫_S v · n dS .    (7.55)

7.3.4 Divergence Theorem by Gauss

Let the volume V with surface S be located in a flow field v; then

    ∫∫_S v · n dS = ∫∫∫_V div v dV .    (7.56)

7.4 Graph Theory

The term graph is used for different objects. Probably most commonly one refers to a
graphical representation of data in the form of a plot, for example an x-y plot
or a pie-chart, a histogram etc. The term graph is also used for a (graphical)
representation of a network consisting of a set of nodes and connections.
Many problems can conveniently be represented by a diagram consisting of nodes
(points, circles or other graphical entities) connected by lines or arrows, thus
forming a network. For nodes one also uses the term vertices, and for the
connections one also frequently uses the term arc. Graphically this may reflect
into a set of points (circles, ellipses or any other object that visualises a body,
volume or system) and a set of lines, bars, arrows representing the connections.
Graphs are used in different contexts where networks are useful, such as Internet
representation, English thesaurus (des Sciences Cognitives), abstract syntax
trees, etc. A number of examples can be found on (various, AT&T), a webpage
that describes a graph visualisation tool. In the context of modelling we have
three major uses of graph theory, namely as a graphical representation of
the space taken by the plant, thus a representation of control volumes and their
interaction (Section 2.1.1), (Figure 2.1). In order to handle complexity, that
is a very large graph, this is extended to a hierarchical graph representation.
Graph theory is also a very handy tool to analyse equations and variables in the
form of a bi-partite graph. Mathematical expressions can be mapped into trees
consisting of variables and operators, used in computer science to store coded
expressions. These are known as abstract syntax trees.
The theory of graphs is old and goes back to Euler's paper on
the Seven Bridges of Königsberg, published in 1736, which is regarded as
the first paper in the history of graph theory. The subject, being part of discrete
mathematics, is documented in many text books commonly available from
the library. There are also many resources on the web describing the subject
(Wikipedia), (MathWorld). The exposition here is thus only a summary of the
material being used in the current context.

7.4.1 Basics of Graph Theory

Graph, vertex, node & incidence function : A graph is a triple G :=
{V(G), E(G), f_G} with V(G) being a set of vertices (nodes), E(G)
a set of edges (arcs), and an incidence function f_G that associates
with each edge of G an unordered, not necessarily distinct, pair of vertices
of G.

Joint : If e is an edge and u and v are two vertices such that f(e) = (u, v),
then e is said to join the vertex u with the vertex v.

Ends : The two vertices u, v are called the ends of the edge e.

Adjacent : Two vertices that are connected by an edge are called adjacent.

Example :

Figure 7.1: An example graph G

The example graph G of Figure 7.1 consists of the sets of vertices
and edges:

    V(G) := {A, B, C, D, E, F} ,
    E(G) := {a, b, c, d, e, f, g, h, i} ,
    ν(G) := |V(G)| = 6 ,
    ǫ(G) := |E(G)| = 9 .

The incidence function is then:

    f_E := { (A, B), (B, F), (A, C), (B, C), (E, F),
             (D, D), (D, E), (C, E), (E, C) } .

Link : An edge with two distinct ends is called a link.

Incident : The ends of an edge are said to be incident with the edge, and
vice versa.

Multiple edge : If more than one edge has the same pair of ends, these
edges are called multiple edges.

Loop : An edge that connects to the same vertex on both ends is
called a loop.

Figure 7.2: Basic graph structures: link, multiple edge, loop; simple and not simple examples

Isomorphic graphs : Two graphs G and H are called isomorphic, G ≅ H,
if there exist bijections ϕ : V(G) → V(H) and Φ : E(G) → E(H) such
that f_G(e) = (u, v) if and only if f_H(Φ(e)) = (ϕ(u), ϕ(v)). Such a pair
(ϕ, Φ) of mappings is called an isomorphism between G and H. In
layman terms: the structure of the two graphs is the same, whilst the
edges and the vertices are labelled differently.

Figure 7.3: Two isomorphic graphs with the mappings: A = 6, B =
5, C = 4, D = 3, E = 2, F = 1 and a = h, b = i, c = j, d = k, e =
k, f = l, g = m

Simple graph : A graph is simple if it has no loops and no two of its links
join the same pair of vertices.

Complete graph : A graph in which each pair of distinct vertices is joined by
an edge is called a complete graph. Not considering isomorphism, there is only one
complete graph with n vertices, which is denoted by K_n.

Empty graph : A graph is empty if it contains no edges.

Finite graph : A graph is finite if both its vertex set and the edge set are
finite.

Trivial graph : A graph with only one vertex is called trivial, all others
non-trivial.

Identical graphs : Two graphs G and H are called identical if V(G) = V(H),
E(G) = E(H) and f_G = f_H.

Bipartite graph : A bipartite graph is one whose vertex set can be partitioned
into two subsets X and Y, so that each edge has one end in X and the
other end in Y. The partition of the graph's vertices V(G) = (X(G), Y(G))
is called a bipartition of the graph G.

Complete bipartite graph : A simple bipartite graph
with the bipartition X and Y in which each vertex of X is joined to each
vertex of Y. If m, n denote the cardinality of the two sets X and Y,
respectively, then the graph is denoted by K_{m,n}. This concept can be
extended to k-partitioned graphs.

Incidence matrix : Any graph can be represented in a ν × ǫ matrix, where
ν := |V(G)| and ǫ := |E(G)|. The incidence matrix of G is the matrix
M(G) := [m_{i,j}] with m_{i,j} being the number of times that a vertex v_i and
an edge e_j are incident.
Figure 7.4: A bipartite graph showing the two sets on the left and
the right. The shown graph is also complete.

Adjacency matrix : The adjacency matrix of the graph G is the ν × ν
matrix A(G) := [a_{i,j}] in which a_{i,j} is the number of edges joining v_i and
v_j.

Example : For our example above the incidence matrix is:

              a    b    c    d    e    f    g    h    i
        A    −1        −1
        B    +1   −1        −1
        C              +1   +1             −1   +1
        D                        ±1   −1
        E                   −1        +1   +1   −1
        F         +1        +1

and the adjacency matrix of the directed graph:

              A    B    C    D    E    F
        A          1    1
        B               1              1
        C                         1
        D                    1    1
        E               1              1
        F

with the rows being the source nodes and the columns the
sink nodes.

Let U(A), D(A) be the upper triangular and the diagonal
matrix extracted from A; then the adjacency matrix for the
un-directed graph is given by U + D + U^T, which is symmetrical.
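The incidence and adjacency matrices of the example graph can be built directly from the incidence function; a sketch in which, following the definition, a loop is counted as incident twice (the table above marks it ±1 instead), and the directed edges contribute −1 at the source and +1 at the sink:

```python
V = ["A", "B", "C", "D", "E", "F"]
E = {"a": ("A", "B"), "b": ("B", "F"), "c": ("A", "C"), "d": ("B", "C"),
     "e": ("E", "F"), "f": ("D", "D"), "g": ("D", "E"), "h": ("C", "E"),
     "i": ("E", "C")}

idx = {v: i for i, v in enumerate(V)}
M = [[0] * len(E) for _ in V]            # nu x epsilon incidence matrix
A = [[0] * len(V) for _ in V]            # nu x nu adjacency matrix (directed)

for j, (u, w) in enumerate(E.values()):
    if u == w:                           # loop: incident twice with its vertex
        M[idx[u]][j] = 2
    else:
        M[idx[u]][j] -= 1                # source
        M[idx[w]][j] += 1                # sink
    A[idx[u]][idx[w]] += 1

# Non-loop columns of M sum to zero, the loop column to two; the adjacency
# entries count all nine edges.
assert all(sum(M[i][j] for i in range(len(V))) in (0, 2) for j in range(len(E)))
assert sum(sum(row) for row in A) == len(E)
```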

Let G_1 and G_2 be two non-empty graphs and define the union as a new graph:
G := (V(G_1) ∪ V(G_2), E(G_1) ∪ E(G_2)).

Subgraph : G_1 and G_2 are subgraphs of G.

Supergraph : G is a supergraph to G_1 and G_2.

Disjoint : If G_1 ∩ G_2 := ∅, this being short for (V(G_1) ∩ V(G_2), E(G_1) ∩ E(G_2)) =
(∅, ∅), then the two graphs are disjoint.

Induced subgraph : The subgraph G_1 := G[V_1] is called an induced subgraph
of G if E_1 is the subset of E whose edges have both ends in V_1.

Edge-induced subgraph : The subgraph G_1 := G[E_1] is called an edge-induced
subgraph of G if V_1 is the subset of V containing the ends of the edges in E_1.

Underlying simple graph : The underlying simple graph of G is obtained by
deleting all loops and reducing all multiple edges to single edges.

Spanning subgraph : A subgraph with identical vertex set: H is a
spanning subgraph of G if V(H) = V(G).

Walk : A walk W in the graph G is a finite non-null sequence of alternating
vertices and edges, starting with the vertex v_0, called the origin, and
ending with the vertex v_k, called the terminus; thus W := v_0 e_1 v_1 e_2 ... e_k v_k.

Trail : W is called a trail if the edges are distinct.

Path : If in addition the vertices are distinct, the walk is called a path.

Closed walk : A walk is closed if it is of positive length and the origin and
terminus are identical.

Cycle : A cycle is a closed walk which passes through distinct vertices.

Connected graph : Two subgraphs G[V_a] and G[V_b] are connected if there
exists at least one edge having ends in each of the sets V_a and V_b.
Otherwise the graph is called disconnected and the two subgraphs G[V_a]
and G[V_b] are called components of G.

Figure 7.5: A general walk and special walks: trail, path, closed walk and cycle:

    walk: A-b-C-c-B-d-D-d-B-a-A-b-C
    trail: A-b-C-c-B-d-D-e-E-g-B
    path: A-a-B-d-D-e-E-f-C
    closed walk: A-a-B-d-D-e-E-g-B-c-C-b-A
    cycle: A-a-B-g-E-f-C-b-A

Figure 7.6: Connected and disconnected graphs, components

7.5 Singular Perturbation - An Introduction

In science and engineering one often finds problems where two systems of largely
different nature are coupled. One of the sets of equations describes the
main system whereas the second describes the small system. Often the effects
of the small system may be ignored, but very often too, the small system makes
all the difference. Flow systems are typically of this nature in that the boundary
layer is very important when describing effects such as lift caused by flow over
a profile as it is used in the construction of a wing. The concept, though, may
also be applied to time scales, such as fast and slow systems. Relevant readings
are:

• Generic (Lin and Segel, 1988)

• Control (Kokotovic et al., 1976; Saksena et al., 1984)

7.5.1 An Illustrative Example

For this very simple exposition to singular perturbation, let us define a simple
time-constant linear system which describes a system consisting of a slow, main
subsystem and a fast second subsystem, both being intimately coupled together:

    ẋ = A_{11} x + A_{12} z ;   x(0) := x_0    (7.57)
    ε ż = A_{21} x + A_{22} z ;   z(0) := z_0    (7.58)
    y := C_1 x + C_2 z    (7.59)

7.5.1.1 The Outer Solution

First we assume that the first equation dominates and set the small number
ε := 0; thus a pseudo-steady-state assumption is made for the second equation.
This yields what is called the outer solution:

    A_{21} x_o + A_{22} z_o = 0 ,    (7.60)

thus

    z_o := −A_{22}^{-1} A_{21} x_o .    (7.61)

Using the result in the first matrix equation yields step-wise the outer solution

    ẋ_o = A_{11} x_o + A_{12} (−A_{22}^{-1} A_{21}) x_o    (7.62)
        = (A_{11} − A_{12} A_{22}^{-1} A_{21}) x_o    (7.63)
        = S x_o .    (7.64)

Integration results in the simple solution

    x_o(t) = e^{S t} x_0 .    (7.66)

The output for the outer solution (indicated by a subscript o) is then

    y_o(t) := C_1 x_o(t) + C_2 (−A_{22}^{-1} A_{21}) x_o(t)    (7.67)
           := (C_1 − C_2 A_{22}^{-1} A_{21}) x_o(t)    (7.68)
           := (C_1 − C_2 A_{22}^{-1} A_{21}) e^{S t} x_0 .    (7.69)

The outer solution is thus a simple exponential, as was probably expected.
This outer solution describes the system approximately in the large time scale,
but what about the small time scale, particularly at the beginning of a change?

7.5.1.2 The Inner Solution

This solution, called the inner solution, is constructed by scaling. In this case
the scaling is done in the time scale. Let

    τ := t/ε .    (7.70)

Then

    ε^{-1} dx_i/dτ = A_{11} x_i + A_{12} z_i ,    (7.71)
    dx_i/dτ = ε (A_{11} x_i + A_{12} z_i) ,    (7.72)
    dx_i/dτ ≈ 0  →  x_i(τ) :≈ x_0 ,    (7.73)

    ε ε^{-1} dz_i/dτ = A_{21} x_i + A_{22} z_i ,    (7.74)
    z_i(τ) = e^{A_{22} τ} z_0 + ∫_0^τ e^{A_{22}(τ − t′)} A_{21} x_0 dt′    (7.75)
           = e^{A_{22} τ} z_0 + A_{22}^{-1} e^{A_{22} u} A_{21} x_0 |_{u:=0}^{u:=τ}    (7.76)
           = e^{A_{22} τ} z_0 + A_{22}^{-1} (e^{A_{22} τ} − I) A_{21} x_0 ,    (7.77)

    y_i(τ) := C_1 x_0 + C_2 ( e^{A_{22} τ} z_0 + A_{22}^{-1} (e^{A_{22} τ} − I) A_{21} x_0 ) ,    (7.78)

with the substitution u := τ − t′ used in evaluating the integral.

7.5.1.3 Combining the Outer and the Inner Solution

Having the outer and the inner solution available, a combined solution may be
constructed by adding the two solutions together and subtracting the common
part of the two. For a scalar observation one could suggest:

    y_c(t) := y_o(t) + y_i(t) − c(t) ,    (7.79)

where the last term represents the common part of the two solutions. In this
case, this common part is extremely simple, as it is just a constant which can be
found easily by analysing the end value:

    y_c(t → large) = y_o(t → large)    (7.80)
    ⇒ y_i(t → large) = c(t) .    (7.81)

Thus

    lim_{t→∞} y_i(t) = C_1 x_0 + C_2 A_{22}^{-1} (−I) A_{21} x_0    (7.82)
                     = (C_1 − C_2 A_{22}^{-1} A_{21}) x_0 .    (7.83)

7.5.1.4 Example

The attached figures show the simulation results for a system:

    A_{11} := −5    A_{12} := 1
    A_{21} := 1     A_{22} := −1
    C_1 := 1        C_2 := 1    (7.84)
    x_0 := 10       z_0 := 5
    ε := 0.01
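The example can be reproduced numerically; a sketch comparing the composite solution derived above with the exact solution obtained from the eigendecomposition of the full system matrix (the parameters are those of Equation (7.84)):

```python
import numpy as np

A11, A12, A21, A22 = -5.0, 1.0, 1.0, -1.0
C1, C2, x0, z0, eps = 1.0, 1.0, 10.0, 5.0, 0.01

# Full system d/dt [x, z] = M [x, z]; exact solution via eigendecomposition.
M = np.array([[A11, A12], [A21 / eps, A22 / eps]])
lam, Vm = np.linalg.eig(M)
coef = np.linalg.solve(Vm, np.array([x0, z0]))
Cvec = np.array([C1, C2])

S = A11 - A12 * A21 / A22                # slow-system matrix (scalar here)
common = (C1 - C2 * A21 / A22) * x0      # common part c of the two solutions

t = np.linspace(0.0, 0.5, 501)
exact = np.array([np.real(Cvec @ (Vm @ (coef * np.exp(lam * ti)))) for ti in t])
y_outer = (C1 - C2 * A21 / A22) * np.exp(S * t) * x0
y_inner = C1 * x0 + C2 * (np.exp(A22 * t / eps) * z0
                          + (np.exp(A22 * t / eps) - 1.0) / A22 * A21 * x0)
y_comb = y_outer + y_inner - common

assert abs(y_comb[0] - (x0 + z0)) < 1e-9        # both start at x0 + z0 = 15
assert np.max(np.abs(y_comb - exact)) < 0.5     # small error, cf. Figure 7.8
```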
Figure 7.7: Inner solution, outer solution, combined approximate solution compared
with the exact solution

7.5.2 Simple Form of Tikhonov's Theorem

Given a set of k first-order ordinary differential equations, let us assume that
the set can be split into two subsets, one which is the fast subsystem having a
small capacity parameter ǫ, and a slow subsystem:

    slow:   dx_s/dt = f_s(x_s, x_f) ,    (7.85)
    fast:   ǫ dx_f/dt = f_f(x_s, x_f) .    (7.86)

In the literature, the slow system is called the degenerate system and the fast is
called the adjoined system.

Tikhonov Theorem: Existence of solution for singularly perturbed
system 1. The solution of the system Equation (7.85), Equation (7.86) tends
to the solution of the degenerate system when ǫ → 0 if the following conditions
are fulfilled:

• The solution z̄ := ϕ(x_s) is an isolated root of the algebraic system
0 := f_f(x_s, x_f), i.e. in the small neighbourhood of this root there are no
other roots.

• The root z̄ is a stable isolated singular point of the adjoined system
for all values of x_s.

• The initial values are in the domain of influence of the stable singular
point of the adjoined system, i.e. the system will evolve to the isolated
solution z̄.
solution z̄
Figure 7.8: Error := exact solution − combined approximate solution

• The solutions of the system Equation (7.85), Equation (7.86) are unique.

• The right-hand sides of the two sets of differential equations are continuous.

For a more detailed exposition see (Vasileva Adelaida B, 1995) and (Kokotovic et al.,
1999).

7.6 Index of Differential Algebraic Equations

Defining a general DAE

    0 := f(x, ẋ, t) ,    (7.87)

the index (differential index) k of the (non)linear, sufficiently smooth DAE is
the smallest k such that the system

    0 := f(x, ẋ, t) ,    (7.88)
    0 := (d/dt) f(x, ẋ, t) ,    (7.89)
         ⋮    (7.90)
    0 := (d^k/dt^k) f(x, ẋ, t)    (7.91)

uniquely determines ẋ as a continuous function of x and t.

7.7 Optimisation

7.7.1 General Problem

    min_{x∈R^n} F(x)
    subject to:  c_i(x) = 0 ,  i := 1, 2, ..., e ,
                 c_i(x) ≥ 0 ,  i := e + 1, ..., m .

Feasible point z: satisfies all constraints.

Feasible region: R := {z_i | ∀i}

Infeasible problem: R := ∅

Optimal point: x*

δ-Neighbourhood of x: N(x, δ)

Definition - local minimum : The point x* is a local minimum of
the general constrained optimisation problem if ∃ δ > 0 such that:

1. F(x) is defined on N(x*, δ), and

2. F(x*) < F(y) ∀ y ∈ N(x*, δ), y ≠ x*.

The function F(x) is smooth and at least twice-continuously differentiable.

7.7.2 Unconstrained Optimisation

7.7.2.1 One-Dimensional

The problem reduces to:

    min_{x∈R^1} f(x)

Definition - Necessary conditions :

1. ∂f(x)/∂x |_{x*} := f_x(x*) := 0

2. ∂²f(x)/∂x² |_{x*} := f_{xx}(x*) ≥ 0

To prove the above conditions, expand the function f(x) in a Taylor series about
the optimal point; the first-order term vanishes because f_x(x*) = 0:

    f(x* + ǫ) := f(x*) + (1/2) ǫ² f_{xx}(x*) .    (7.92)
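The necessary conditions can be illustrated on a sample function (an arbitrary quartic); a Newton iteration on f_x locates a stationary point, and f_{xx} > 0 confirms a local minimum:

```python
# Sample problem: f(x) := x^4 - 3 x^2 + x, which has a local minimum near x = 1.13.
f = lambda x: x**4 - 3.0 * x**2 + x
fx = lambda x: 4.0 * x**3 - 6.0 * x + 1.0       # first derivative
fxx = lambda x: 12.0 * x**2 - 6.0               # second derivative

x = 1.5                        # start in the right-hand well
for _ in range(50):            # Newton iteration on f_x
    x -= fx(x) / fxx(x)

assert abs(fx(x)) < 1e-10      # condition 1: f_x(x*) = 0
assert fxx(x) > 0.0            # condition 2: f_xx(x*) >= 0 (here strictly)
assert f(x) < f(x + 1e-3) and f(x) < f(x - 1e-3)   # indeed a local minimum
```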

7.8 Elements of Statistics

7.8.1 Probability

7.8.1.1 Axiomatic Definition

Definition - Event Space E : An event space is called a probability
space or a probability field if every set of observed events A has a
probability P(A).

Definition - Probability Axioms :

1. Probability is a real number ∈ [0, 1].

2. Impossible implies P(∅) := 0 and
certain implies P(E) := 1.

3. For disjoint, mutually exclusive events A and B:
P(A + B) := P(A) + P(B).

4. For non-mutually exclusive events A and B:
P(A + B) := P(A) + P(B) − P(AB).

Definition - Conditional Probability : The conditional probability
of A given B, that is, event B has already occurred and P(B) ≠ 0, is

    P(A|B) = P(AB) / P(B)

Definition - Random Variable : a function of an event space.

The probability of a random variable to assume a value between a and b is given by

    P(x ∈ [a, b]) := ∫_a^b p(x) dx ,    (7.93)

where p(x) is the probability density function characterising the continuous
random variable x. If the random variable is discrete, the integral is replaced
by corresponding summations.

7.8.1.2 Bayes' Theorem

Theorem 9 (Bayes' Theorem). Let {A_i} be a disjoint set of observed events
and B an observed event; then for each j:

    P(A_j | B) = P(A_j B) / P(B)    (7.94)
               = P(A_j) P(B|A_j) / Σ_{i:=1}^{n} P(A_i) P(B|A_i)    (7.95)

The theorem is often used with A_j denoting a statement about an unknown
phenomenon, whilst B presents the known information about the process. P(A_j)
is denoted as the prior probability, P(A_j | B) as the posterior probability and
P(B|A_j) as the likelihood.
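A small numerical illustration of (7.95), with a hypothetical two-hypothesis process example (all numbers invented): A1 = "fault present", A2 = "no fault", B = "alarm raised":

```python
from fractions import Fraction as F

prior = {"A1": F(1, 100), "A2": F(99, 100)}        # P(Aj)
likelihood = {"A1": F(95, 100), "A2": F(5, 100)}   # P(B | Aj)

P_B = sum(prior[a] * likelihood[a] for a in prior)              # denominator
posterior = {a: prior[a] * likelihood[a] / P_B for a in prior}  # P(Aj | B)

assert sum(posterior.values()) == 1
# A rare fault with a good detector still yields a modest posterior:
assert posterior["A1"] == F(19, 118)   # about 0.16
```

Exact rational arithmetic is used here only so the result can be checked without rounding issues.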

7.8.1.3 Distribution Measures

Definition - Mode : the maximum of the probability distribution
function.

Definition - Median : location where the cumulative distribution
function is 1/2.

The most important measures are the mean

    x̄ := E[x] ,
       := Σ_i x_i p(x_i)   for x discrete,    (7.96, 7.97)
    x̄ := E[x] ,
       := ∫_{−∞}^{+∞} x p(x) dx   for x continuous,    (7.98, 7.99)

and the variance

    var(x) := E[(x − E[x])²] ,
           := Σ_i (x_i − x̄)² p(x_i)   for x discrete,    (7.100, 7.101)
    var(x) := E[(x − E[x])²] ,
           := ∫_{−∞}^{+∞} (x − x̄)² p(x) dx   for x continuous.    (7.102, 7.103)

The central moments are defined as

    µ_k := E[(x − E[x])^k] ,
        := Σ_i (x_i − x̄)^k p(x_i)   for x discrete,    (7.104, 7.105)
    µ_k := E[(x − E[x])^k] ,
        := ∫_{−∞}^{+∞} (x − x̄)^k p(x) dx   for x continuous.    (7.106, 7.107)

The (raw) moments are defined as

    µ′_k := E[x^k] ,
         := Σ_i x_i^k p(x_i)   for x discrete,    (7.108, 7.109)
    µ′_k := E[x^k] ,
         := ∫_{−∞}^{+∞} x^k p(x) dx   for x continuous.    (7.110, 7.111)

The second central moment is the variance. The third central moment is called
the skewness and the fourth is called the kurtosis.
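The discrete formulas can be exercised on a small hypothetical three-valued distribution:

```python
# Mean, variance and central moments per (7.96)-(7.105) for an invented
# discrete distribution on {0, 1, 2}.
xs = [0.0, 1.0, 2.0]
ps = [0.2, 0.5, 0.3]
assert abs(sum(ps) - 1.0) < 1e-12           # a valid probability distribution

mean = sum(x * p for x, p in zip(xs, ps))                        # (7.97)
mu = lambda k: sum((x - mean)**k * p for x, p in zip(xs, ps))    # (7.105)
var = mu(2)

assert abs(mu(1)) < 1e-12                   # first central moment is zero
# second central moment equals E[x^2] - (E[x])^2, cf. (7.116) below
assert abs(var - (sum(x**2 * p for x, p in zip(xs, ps)) - mean**2)) < 1e-12
```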

7.8.1.3.1 Behaviour of Moments

Let x, y, z ∈ E be three random variables on the same probability space and a, b, c, d
arbitrary constants.

    E[a x + b] := a E[x] + b    (7.112)
    E[x + y] := E[x] + E[y]    (7.113)
    E[x y] := E[x] E[y]   if x and y are uncorrelated    (7.114)

    var(x) := E[(x − E[x])²]    (7.115)
           := E[x²] − (E[x])²    (7.116)
           ≥ 0    (7.117)
           = 0   for x := const    (7.118)
    var(a x + b) := a² var(x)    (7.119)
    var(x + y) := var(x) + var(y)   if x and y are independent    (7.120)

    cov(x, y) := E[(x − E[x]) (y − E[y])]    (7.121)
              := E[x y − x E[y] − E[x] y + E[x] E[y]]    (7.122)
              := E[x y] − E[x] E[y]    (7.123)
    ρ(x, y) := cov(x, y) / √(var(x) var(y))    (7.124)
    cov(x, y)² ≤ var(x) var(y)    (7.125)
    |ρ(x, y)| = 1   if x and y lie on a straight line    (7.126)
    ρ(x, y) = 0   if x and y are independent    (7.127)
    cov(a x + b, c y + d) := a c cov(x, y)    (7.128)
    cov(x + y, z) := cov(x, z) + cov(y, z)    (7.129)

7.8.1.3.2 Some Follow-Ups

Given that the x_i all have the same expectation value:

    E[x̄] := E[ (1/n) Σ_{i:=1}^{n} x_i ] = E[x_1]    (7.130)

Given x and y independent:

    var(x − y) := var(x) + var(y)    (7.131)

Given the x_i uncorrelated and with the same variance:

    var( Σ_{i:=1}^{n} x_i ) := n var(x_1)    (7.132)

    var(x̄) := var( (1/n) Σ_{i:=1}^{n} x_i )    (7.133)
           := (1/n) var(x_1)    (7.134)

Also

    E[x] := E[ E[x|y] ]    (7.135)
    var(x) := E[ var(x|y) ] + var( E[x|y] )    (7.136)

7.8.2 Most Common Distribution Functions

7.8.2.1 Binomial Distribution

Number of successes in n independent events, each with probability p:

P(x = k) := (n choose k) p^k (1 − p)^{n−k} ,   k := 0, 1, . . . , n   (7.137)
E[x] := n p                                                          (7.138)
var(x) := n p (1 − p) = n p q                                        (7.139)

7.8.2.2 Poisson Distribution

Number of rare events with expectation λ:

P(x = k) := (λ^k / k!) e^{−λ} ,   k := 0, 1, 2, . . .                (7.140)
E[x] := λ                                                            (7.141)
var(x) := λ                                                          (7.142)

The Poisson distribution is the limit of the binomial distribution with
p → 0, n → ∞, n p → λ.
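This limit is easy to observe numerically. A sketch (parameter values are our own) comparing the two probability mass functions for large n and small p with n p = λ:

```python
import math

# Sketch: the binomial pmf (7.137) with large n, small p and n p = lambda
# approaches the Poisson pmf (7.140). Numbers are made up for illustration.
def binom_pmf(n, p, k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(lam, k):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam, n = 2.0, 10_000
p = lam / n
max_diff = max(abs(binom_pmf(n, p, k) - poisson_pmf(lam, k)) for k in range(15))
```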
160 CHAPTER 7. APPENDIX: MATHEMATICAL COMPONENTS

7.8.2.3 Normal Distribution

Idealised distribution of measurement errors and approximation for many other
distributions. The probability density of the normal distribution N (µ, σ²) for
x ∈ (−∞, +∞) is given by:

p(x) := ( 1 / (√(2 π) σ) ) e^{ −(x−µ)² / (2 σ²) }                    (7.143)
E[x] := µ                                                            (7.144)
var(x) := σ²                                                         (7.145)
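A quick sanity check of the density (7.143), our own sketch with arbitrarily chosen µ and σ: a trapezoidal quadrature of p(x) over a wide interval must return one.

```python
import math

# Sketch: trapezoidal check that the normal density integrates to one.
# mu and sigma are chosen arbitrarily for illustration.
mu, sigma = 1.5, 0.7

def p(x):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

lo, hi, steps = mu - 10 * sigma, mu + 10 * sigma, 4000
h = (hi - lo) / steps
area = h * (sum(p(lo + i * h) for i in range(steps + 1)) - 0.5 * (p(lo) + p(hi)))
```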

7.8.2.4 Exponential Distribution

Describes processes without memory.

p(x) := (1/µ) e^{−x/µ}   for x ≥ 0 ,   0 else                        (7.146)
E[x] := µ                                                            (7.147)
var(x) := µ²                                                         (7.148)

7.8.2.5 Uniform Distribution

Mostly used as initial condition in recursive processes and in random number
generation, where the uniform numbers are thereafter transformed.

p(x) := 1/(b − a)   for a < x < b ,   0 else                         (7.149)
E[x] := (a + b)/2                                                    (7.150)
var(x) := (b − a)²/12                                                (7.151)

7.8.3 Essential Statistics

7.8.3.1 Chi-Square Distribution

Distribution of the sum of squares of ν independent, standard-normally distributed
random variables xi ∼ N (0, 1):

Σ_{i:=1}^{ν} xi² ∼ χ²_ν                                              (7.152)
E[x] := ν                                                            (7.153)
var(x) := 2 ν                                                        (7.154)

• The distribution is continuous on (0, ∞).
• For ν := 2 it is the exponential distribution with µ := 2.
• Approaches the normal distribution for large ν.
• ν :: integer.

7.8.3.2 Student t Distribution

Distribution of the standardised arithmetic average (x̄ − µ)/s_x̄ over n := ν + 1
independent, normally distributed xi , where s_x̄ is the empirical standard
deviation of the average:

s²_x̄ := Σ_{i:=1}^{n} (xi − x̄)² / ( n (n − 1) )                       (7.155)

More generally: let x and y be independent, x ∼ N (0, 1) and y ∼ χ²_ν ; then:

x / √(y/ν) ∼ t_ν                                                     (7.156)

• The distribution is continuous on (−∞, +∞).
• Symmetrical around 0.
• Bell-shaped.
• Longer tails than the normal distribution.
• Approaches the normal distribution for large ν.
• ν :: integer.

Mostly used to test an average or to compare two averages.

7.8.3.3 F-Distribution

Distribution of the quotient of two independent estimates of the same variance,
each obtained from a normally distributed variable. More generally: let x and y
be independent, x ∼ χ²_{νx} and y ∼ χ²_{νy} ; then:

(x/νx) / (y/νy) ∼ F_{νx,νy}                                          (7.157)

• The distribution is continuous on (0, ∞).
• Approaches the normal distribution for large νx and νy , with a mean of 1
  and a variance of 2/νx + 2/νy .
• νx , νy :: integer.

Most common use: analysis of variance.

Chapter 8

Things to Know

8.1 Basics on Reactions

8.1.1 Stoichiometry

Given a reaction system:

A + B ⇒ C                                                            (8.1)
2 B + C ⇒ D                                                          (8.2)
2 A ⇒ F + G                                                          (8.3)
G + B ⇒ E                                                            (8.4)

with the species ordered as

A := [ A  B  C  D  E  F  G ]                                         (8.5)

the stoichiometric matrix is:

      [ −1  −1   1   0   0   0   0 ]
N :=  [  0  −2  −1   1   0   0   0 ]                                 (8.6)
      [ −2   0   0   0   0   1   1 ]
      [  0  −1   0   0   1   0  −1 ]
Chapter 9

Examples, Exercises, Answers

9.1 Processes

9.1.1 Simple Processes

9.1.1.1 Topology Exercises

• A stirred tank with a jacket.
• Same, but assume that the heat exchanger consists of a half-pipe welded
  onto the outside of the tank.
• A single-pipe heat exchanger consisting of an inner and an outer pipe,
  thus just two pipes, one inside the other.
• A tray of a distillation column.
• A distillation column.
• A very simple model of a heat exchanger, for example a boiler or a condenser.
• For fun: how about a chicken coop (a house where you keep chickens), and
  how about the chicken...
• A kettle, the hot-water heater you may have in your kitchen, of the type
  with the heating plate on the bottom.
  Purpose: switching it off when the water is boiling.
• Transporting fruit in containers is a non-trivial problem, as the fruit must
  breathe and ripens or rots, thus undergoes chemical changes, which are
  associated with a thermal effect. The latter can be quite significant. Think
  about a haystack, for example.
  Purpose: dynamics of the changes in the fruit's quality.
• A hot glass of water.
  Purpose 1: dynamics of cooling down.
  Purpose 2: losing mass.


• A hot glass of water covered with a lid.
  Purpose as above.
• A piece of butter melting in the pan.
  Purpose: how long does it take to melt?
• A toilet.
• A coffee machine.
• A tube in a power-plant boiler.
9.1.1.2 Temperature Sensor

Having a temperature sensor in a material, say a fluid, is a common situation,
and reading the temperature of the fluid as part of observing the plant's
behaviour is a common thing to do. But does one really read the temperature
of the fluid? Assuming that we do not discuss the issue of what temperature is
and what we actually measure, but agree on knowing what temperature is and
simply have a device that gives us a measure of the temperature, we are still
left with the capacity and transport processes associated with reading the
temperature of the fluid.

Thus let us assume we have a temperature sensor in a liquid, and let us define
the task of modelling the joint system of fluid and temperature sensor. We may
then want to take the view that the temperature sensor is of uniform
temperature; thus we lump the material that makes up the sensor into a simple
lumped system, with the intensive properties being uniform within the system.
Further, let us assume that the fluid in which the sensor is immersed can be
viewed as consisting of a bulk with uniform temperature and a uniform fluid
film around the sensor that acts as the heat-transfer system between the bulk
of the fluid and the sensor. Pictorially, this maps into the following graph:

Figure 9.1: A first abstraction of a process consisting of a temperature
sensor in a fluid: the bulk E and the sensor S, coupled by the heat flow
q̂E|S and the volume work ŵS|E .

The part of interest is the sensor, thus we model the sensor dynamics, having
already assumed that it can be seen as an internally fast system:

dES /dt := q̂E|S − ŵS|E .                                             (9.1)

The heat flow model approximates the behaviour of the film by:

q̂E|S := −kE|S AE|S (TS − TE ) ,                                       (9.2)
kE|S := given ,                                                      (9.3)
AE|S := given ,                                                      (9.4)

and the system volume work term, representing the change of the volume, by:

ŵS|E := pS dVS /dt .                                                 (9.5)

At this point it is appropriate to make some simplifications and assumptions
that affect the energy balance. The first simplification is associated with the
fact that the sensor is not moving about, that is, its kinetic energy KS and
potential energy PS are zero:

dKS /dt := 0 ,                                                       (9.6)
dPS /dt := 0 .                                                       (9.7)
Assuming constant pressure also seems an appropriate thing to do. Introducing
the enthalpy:

H := U + p V ,                                                       (9.8)

and observing that

dH/dt := dU/dt + V dp/dt + p dV /dt ,                                (9.9)

the energy balance reduces to:

dHS /dt := q̂E|S .                                                    (9.10)

To complete the model we have to provide the link between the temperature of
the system and the respective fundamental state, namely the enthalpy in this
case:

H := ∫_{Tr}^{T} (∂H/∂T ) dT ,                                        (9.11)
  := ∫_{Tr}^{T} Cp (T ) dT .                                         (9.12)

Assuming we know the heat apa ity as a fun tion of time as a produ t of the
known volume, known, onstant density and the spe i heat apa ity in the
form of a polynomial with the known parameters {ai }:
Cp (T ) := V ρ cp (T ) , (9.13)
X
cp (T ) := ai T i , (9.14)
i
ai := given , (9.15)
V := given , (9.16)
ρ := given , (9.17)
168 CHAPTER 9. EXAMPLES, EXERCISES, ANSWERS

the model is ompletely spe ied and proper. The dynami s of the sensor are
driven by the temperature of the environment, a fun tion of the given onditions
and parameters.

Figure 9.2: Equation-variable graph for the temperature sensor model, linking
Ho , Ḣ, q̂, k, A, T and TE through H(Cp (T )), implicit in T , with V , ρ,
cp (T ) and the parameters {ai }.
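The sensor model can be exercised with a minimal forward-Euler sketch. For simplicity the polynomial cp is collapsed to a constant, so that dHS/dt = q̂ becomes dTS/dt = kA (TE − TS)/(V ρ cp); all numerical values are made up:

```python
# Euler-integration sketch of dH_S/dt = q_hat_E|S, Eq. (9.10), with constant
# heat capacity. All parameter values are hypothetical.
k_A = 5.0          # k_E|S * A_E|S  [W/K]
V_rho_cp = 50.0    # V * rho * c_p  [J/K]
T_E = 80.0         # bulk (environment) temperature
T_S = 20.0         # initial sensor temperature

dt = 0.01          # [s]
for _ in range(100_000):                # 1000 s, i.e. 100 time constants
    q = -k_A * (T_S - T_E)              # heat flow into the sensor, Eq. (9.2)
    T_S += q / V_rho_cp * dt            # dT_S/dt = q / (V rho c_p)
```

The time constant is V ρ cp /(kA) = 10 s with these numbers, so the sensor has fully relaxed to the bulk temperature at the end of the run.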

9.1.2 Evaporating Water from a Glass

9.1.2.1 Problem Description

We assume an idealised situation, in which a glass with constant diameter is half
filled with water. The gas phase above the water, up to the glass' top, is stagnant,
and water diffuses through this stagnant phase into the open space above the
glass. The latter is assumed to be uniform and not affected by the evaporating
water. The room is not saturated with water and the water content is known.
You may in general assume all material descriptions, including the parameters,
to be known. The vapour pressure of water in the gas phase of the room is
constant and known.

Develop a differential model which can be used to compute how long it takes
to evaporate all the water from the glass. The temperature in the glass may be
assumed to be constant and equal to the temperature in the room. One may
also assume that all physical parameters are known. Note that the diffusion
process is much, much faster than the reduction of the water level. Thus look
at two time scales: one on which the diffusion takes place, with the level in the
glass being constant, and then the long time scale on which the level changes.
The first step will generate the transfer law for the second part.

9.1.2.2 Solution

The dynamics of the process have at least three different time scales:

• The fastest is the time in which the diffusion profile is being established.
• The second is the time scale on which the diffusion is fast compared to
  the change of the volume of the water in the glass. The justification for
  this assumption is that the volume of the evaporated water, as vapour, is
  about three orders of magnitude larger than the corresponding liquid volume.
• The third is the slowest one, on which the level of the water in the glass
  changes.

9.1.2.2.1 The Diffusion Equation

Drawing up either an integral balance or a shell balance, the diffusion law is
obtained (Section 9.2.1, Equation (9.225)). Taking the chemical potential as
the driving force one obtains Fick's second law in complex form. The usual
derivation would use the composition as the driving force, which can be seen as
the linearised version of the transport with the chemical potential as the driving
force. Since we have really only one species to worry about, even though air
is diffusing in counter-current to replace the water having moved, the scalar
version of the diffusion equation is sufficient. Thus one gets:

∂c(z, t)/∂t := λ ∂²µ/∂z² .                                           (9.18)

Figure 9.3: A half-full glass of water in a room with uniform conditions;
zW marks the water level and zE the top of the glass.

9.1.2.2.2 Getting the Transfer Law: the Second Time Scale

On the second level one assumes that the level of the water is constant, whilst
the diffusion is fast. Dividing the diffusion equation by the diagonal diffusion
matrix and letting the individual diffusion coefficients go to infinity:

lim_{λ→∞} λ⁻¹ ∂c(z, t)/∂t := ∂²µ/∂z² ,                               (9.19)

the left-hand side vanishes, which eliminates the state of the diffusion system.
The profile of the driving force along the length is thus a linear function:

µ(z) := a z + b .                                                    (9.20)

The two parameters a, b can be readily obtained from the boundary conditions.
On the water side, the air is saturated with water vapour, whilst at the top
end the vapour pressure of water is fixed by the reservoir. Let the saturation
pressure be p∗ ; then it can be obtained from the equilibrium condition at the
surface:

µW := µD (zW ) ,                                                     (9.21)
µoW + R T ln 1 := µoD (zW ) + R T ln xG (zW ) ,                       (9.22)

where the fact that water is the only species in the water lump has been
used. With pB being the known barometric pressure and

x∗G (zW ) := p∗W / pB ,                                               (9.23)

the saturation vapour pressure can be calculated from the known temperature
and the known standard chemical potentials µoW , µoD (zW ). The slope a is:

a := (x∗G − xE ) / (zW − zE ) .                                       (9.24)

Figure 9.4: A detailed model: the water as a lump (W) behaves on
the smallest time scale like a reservoir, thus uniform; the gas phase
in the glass is a one-dimensional distributed system (D); and the
room is a reservoir (E).

Figure 9.5: On the second time scale the diffusion system (D) has "lost"
its capacity. It is reduced to a pure resistance carrying the flow n̂W |E .

The flow through the diffusion system is of interest:

n̂W |E := −k ∂µ(z)/∂z ,                                               (9.25)
       := −k a ,                                                     (9.26)
       := −k (x∗G − xE ) / (zW − zE ) .                               (9.27)

Now all is ready for the next bigger time scale:

9.1.2.2.3 Finally the Water is Evaporating

The rest is simple now. The mass balance for the water lump is drawn up:

ṅW := −n̂W |E ,                                                       (9.28)

The flow is as we found:

n̂W |E := −k (x∗G − xE ) / (zW − zE ) .                                (9.29)

Figure 9.6: On the third time scale the volume of the water body is
changing, thus the level is dropping.

This must be supplemented with the mapping between the level and the mass:

VW := nW / ρW ,                                                      (9.30)
    := AW zW .                                                       (9.31)

This set of equations is to be solved for the secondary state variable in question,
namely zW :

zW := nW / (AW ρW ) .                                                (9.32)

The rest of the variables are known: k, zE , xE , µoW , µoD , R, T . Thus the
resulting set of equations is well defined; if one in addition specifies the initial
conditions, the problem can be integrated, in this case actually analytically.

9.1.3 The Mixing Plant

The mixing plant consists of four vessels: two feed tanks feeding the one mixing
tank in the centre, which ejects the product to the storage tank.

Figure 9.7: Flowsheet of a dynamic mixing plant with abstraction: feed tanks
a and b feed the mixing tank c through the flows V̂a|c , n̂a|c and V̂b|c , n̂b|c ;
the mixing tank feeds the storage tank d through V̂c|d , n̂c|d .

Generate a text-book representation by transforming the component mass
balances into the concentration & volume space.

9.1.3.1 Solution

9.1.3.1.1 Behaviour: Component Mass Balances

The model of a tank with several inputs and outputs is described as an ideally-
stirred tank reactor. The energy balance for the system is not of interest, as
no exchange of energy occurs. Thus only the component mass balances are to
be established. The component mass balances are a set of ordinary differential
equations in the component mass, which for this task we shall transform into
differential equations in the concentration and the volume. Assuming that there
is no reaction taking place in any of the tanks, the component mass balances
for an arbitrary system S are:

dnS /dt = Σ_{∀m} αm n̂m + ñS                                          (9.33)

with αm ∈ {−1, +1} giving the reference direction, n̂m the mass flow m,
and ñS the reaction-dependent transformation rate.

9.1.3.1.2 Transfer

There is no transfer law given, but it is assumed that the volumetric flow is
known; thus the transfer is given by:

n̂m := km V̂m cm ,                                                     (9.34)
km := controlled, thus known ,                                       (9.35)
V̂m := known .                                                       (9.36)

The km has been introduced merely to demonstrate where the controller would
be connected. In this case, the volumetric flow rate would be the maximum
available, and this variable would be adjusted by the controller between 0
and 1.

The concentration is the one of the tank the fluid is coming from. Mostly
people assume that it may only possibly come from one tank at all times, that
is, the flow direction never changes. This may or may not be a valid assumption;
here it seems reasonable, though. If this is not the case, then the concentration
switches as the volumetric flow changes sign!

9.1.3.1.3 Reaction

There is no reaction in the tank, thus

ñS := 0 .                                                            (9.37)

9.1.3.1.4 State Variable Transformations

In this section, all variables except the fundamental state, which are the con-
served quantities, are to be linked back to the fundamental state and known
quantities such as the volumetric flow rate and the density. For the notation,
we use S as a generic index for system, meaning that the equations really apply
to any of them, namely the two feed tanks, the mixing tank and the product
tank.

The transfer introduces the concentration. Concentration is defined by:

cS := nS / VS ,                                                      (9.38)

introducing the volume, which is a function of the component mass, the basic
state:

VS := nS / ρS ,                                                      (9.39)
ρS := const .                                                        (9.40)

The density is the molar density and assumed constant and known. The total
molar mass is obtained as the scalar product of a one-vector with the molar
masses in the system:

nS := eT nS ,                                                        (9.41)
eT := [1, 1, . . . , 1] ,                                            (9.42)

which completes this section.

9.1.3.1.5 Manipulations

Since we want the differential equations in terms of the concentrations, we start
with the variable transformation defining the concentration, recast it in the
explicit form for the fundamental state, and differentiate with respect to time;
we do the same for the total molar mass:

dnS /dt := dcS /dt VS + cS dVS /dt                                    (9.43)
dnS /dt := (∂ρS /∂nS )T dnS /dt VS + ρS dVS /dt                       (9.44)
dnS /dt := eT dnS /dt                                                (9.45)

Rearranging the last two equations one finds:

dVS /dt := ρS⁻¹ ( eT − (∂ρS /∂nS )T VS ) dnS /dt                      (9.46)

For the change in the composition one finds:

dcS /dt := VS⁻¹ ( dnS /dt − cS dVS /dt )                              (9.47)

We could have used the fact that the density is constant earlier. But for the
purpose of demonstrating how it enters the calculation, we kept it in so far,
and it is only now that the assumption is used to find the simplified equation
for the change in the volume:

dVS /dt := ρS⁻¹ eT dnS /dt                                            (9.48)

This leads to further simplifications:

dVS /dt = ρS⁻¹ Σ_{∀m} αm km V̂m eT cm                                 (9.49)
        = ρS⁻¹ Σ_{∀m} αm km V̂m ρm = Σ_{∀m} αm km V̂m                  (9.50)

dcS /dt = VS⁻¹ ( Σ_{∀m} αm km V̂m cm − cS Σ_{∀m} αm km V̂m )           (9.51)
        = VS⁻¹ ( Σ_{∀m} αm km V̂m (cm − cS ) )                        (9.52)

The step ρm := eT cm is probably the hardest to see.

The result is now in generic form and can be applied to any of the tanks. In
the case of no inflow, which represents either of the two feed tanks, the concen-
tration change becomes zero, as expected, since the flow concentration cm = cS .
The ratios VS V̂m⁻¹ are the time constants with respect to the various flows
into and out of the system S .

Thus the complete model reads:

dVa /dt = −ka|c V̂a|c ,                                               (9.53)
dca /dt = 0 ,                                                        (9.54)
dVb /dt = −kb|c V̂b|c ,                                               (9.55)
dcb /dt = 0 ,                                                        (9.56)
dVc /dt = ka|c V̂a|c + kb|c V̂b|c − kc|d V̂c|d ,                         (9.57)
dcc /dt = Vc⁻¹ ( ka|c V̂a|c (ca − cc ) + kb|c V̂b|c (cb − cc ) ) ,      (9.58)
dVd /dt = kc|d V̂c|d ,                                                (9.59)
dcd /dt = Vd⁻¹ ( kc|d V̂c|d (cc − cd ) ) .                             (9.60)

9.1.3.1.6 Systems Representation

We define the following vectors:

• state vector xS := [VS , cS ]T ,   S ∈ {a, b, c, d}

• input vector u := [ka|c , kb|c , kc|d ]T

• output vector yS := xS ,   S ∈ {a, b, c, d}

• conditions γ := [ca , cb , V̂a|c , V̂b|c , V̂c|d ]T

• parameters: there are no real parameters. The distinction between pa-
  rameters and conditions is, though, not quite sharp. We use the rule that
  if it is a state that is known, then it is a condition.

It remains to use these definitions and rewrite the equations in this new nota-
tion, writing xS,1 for the volume and xS,2 for the concentration component:

[ ẋa,1 ]    [ −u1 γ3                                                 ]
[ ẋa,2 ]    [ 0                                                      ]
[ ẋb,1 ]    [ −u2 γ4                                                 ]
[ ẋb,2 ] =  [ 0                                                      ]   (9.61)
[ ẋc,1 ]    [ u1 γ3 + u2 γ4 − u3 γ5                                   ]
[ ẋc,2 ]    [ (1/xc,1 ) ( u1 γ3 (xa,2 − xc,2 ) + u2 γ4 (xb,2 − xc,2 ) ) ]
[ ẋd,1 ]    [ u3 γ5                                                   ]
[ ẋd,2 ]    [ (1/xd,1 ) ( u3 γ5 (xc,2 − xd,2 ) )                       ]

9.1.4 Mixing Tank with Fast Reaction

Mixing tanks are often found in plants as main reactors, but also in make-up
operations such as neutralisation, which is the process we shall have a look at
here. Neutralisation is a reaction between an acidic chemical component and
a base, most of which are very fast. Thus the example is used to demonstrate
model reduction for fast transposition.

9.1.4.1 Step 0: Abstraction

The process structure is essentially identical to the generic mixing plant dis-
cussed above; thus we refer to Figure 9.7. The obvious difference is that
we now have a reaction going on in the contents of the reactor, which is:

H2 CO3 ↔ H+ + HCO3−                                                  (9.62)
HCO3− ↔ H+ + CO3 2−                                                  (9.63)
H2 O ↔ H+ + OH−                                                      (9.64)
NaOH ↔ Na+ + OH−                                                     (9.65)
NaHCO3 ↔ Na+ + HCO3−                                                 (9.66)
Na2 CO3 ↔ 2 Na+ + CO3 2−                                             (9.67)
N a2 CO3 ↔ 2 N a+ + CO3−2 (9.67)

9.1.4.2 Step 1: Behaviour

Assuming that we operate with diluted solutions, the energy household is not
of interest and we only need to focus on the component mass balances to get a
reasonable description of the process. Let system A be the feed tank a and
system B the feed tank b, whilst the reactor contents we label with R and the
product tank with P:

ṅR := n̂A|R + n̂B|R − n̂R|P + VR NT η̃ R .                               (9.68)

9.1.4.3 Step 2a: Transport

The transport equations are not needed, as one assumes knowing the volumetric
flows in each of the streams. Thus the representation of the component flows
reduces to a simple transformation:

n̂a|b := ca|b V̂a|b ,   a|b ∈ {A|R, B|R, R|P } .                        (9.69)

Since the flows can reasonably be assumed unidirectional, the intensive property
of the stream is the one of the source, thus ca|b = ca .

9.1.4.4 Step 2b: Transposition

The reaction system consists of the dissociation reactions for the carbonic acid,
in two stages, for water, and for the sodium hydroxide and its salts. The second
dissociation is nearly complete, thus one could consider ignoring it in the set
of equilibrium reactions.

Let the species set be:

A := { H2 CO3 , HCO3− , CO3 2− , H+ , OH− , H2 O, NaOH, Na+ , NaHCO3 , Na2 CO3 }
     {    1   ,   2   ,    3   ,  4 ,  5  ,   6  ,   7  ,  8  ,    9   ,   10    }

The stoichiometry takes the form (each reversible reaction contributing a
forward and a backward row):

      [ −1    1    0    1    0    0    0    0    0    0 ]
      [  1   −1    0   −1    0    0    0    0    0    0 ]
      [  0   −1    1    1    0    0    0    0    0    0 ]
      [  0    1   −1   −1    0    0    0    0    0    0 ]
      [  0    0    0    1    1   −1    0    0    0    0 ]
N :=  [  0    0    0   −1   −1    1    0    0    0    0 ]
      [  0    0    0    0    1    0   −1    1    0    0 ]
      [  0    0    0    0   −1    0    1   −1    0    0 ]
      [  0    1    0    0    0    0    0    1   −1    0 ]
      [  0   −1    0    0    0    0    0   −1    1    0 ]
      [  0    0    1    0    0    0    0    2    0   −1 ]
      [  0    0   −1    0    0    0    0   −2    0    1 ]

9.1.4.5 Step 3: Variable Transformations

Little is needed here. The main one is the link between the molar composition
and the component mass:

c := V −1 n ,                                                        (9.70)

with the volume given by:

V := ρ−1 n ,                                                         (9.71)

and the total mass being:

n := eT n ,                                                          (9.72)

with the vector eT := [1, 1, . . . , 1].

To make the link to the experimental setup, one needs to introduce the pH at
this point:

y := − log(c4 ) ,                                                    (9.73)

where the concentration c4 is the concentration of the 4th species, which is the
proton H+ .

9.1.4.6 Step 4: Conditions

Assuming the density is constant, as the solvent dominates, the above equations
are complete.

9.1.4.7 Step 5: Fast Reactions

To eliminate the unknown fast reaction rates from the balance equations, the left
null-matrix of the transposed stoichiometric matrix must be computed, which
is:

      [  0    0    0    0    1    1    1    0    0    0 ]
Ω :=  [  0   −1   −2    1   −1    0    0    1    0    0 ]
      [  2    1    0    1    0    1    0    0    1    0 ]
      [ −1    0    1   −1    0   −1    0    0    0    1 ]

The rank of Ω is 4. Thus, of the 10 species, 4 come from the dynamic balances
and the rest come from the six equilibrium relations that complete the model
equations:

0 := k(c) .                                                          (9.74)
9.1.4.8 The Reduced Model

The final model is assembled quickly. The component mass balance for the
contents:

Ω ṅR := Ω n̂A|R + Ω n̂B|R − Ω n̂R|P .                                   (9.75)
9.1.5 Example: Linear Heat Conductor

Transfer systems are very often simplified by assuming the transfer system to
be fast compared to the parts of the system it is connected to. This assumption
results in an ideal behaviour of a physical transfer system. When implementing
this assumption, the description of the physical transfer system reduces to a
simple resistance to the flow of the transferred extensive quantity. It should
be remembered, though, that in reality the connection represents the simplified
behaviour of a physical transfer system which, on the short time scale, does
exhibit capacity effects.

Figure 9.8: Transfer of an extensive quantity from one system to an-
other via a distributed transfer system with temperature profile T (x, t).

The example we use is conductive heat transfer through a wall. Figure 9.8
shows the simplest possible arrangement, in which two capacities are coupled by
a distributed heat transfer system, for example a solid wall separating two fluid
bodies. The energy balance can be transformed into the well-known heat dif-
fusion equation of Fourier. Let T (x, t) be the temperature as a function of the
spatial co-ordinate x and the time t; then the temperature profile is obtained
by integrating the heat diffusion equation:

∂T (x, t)/∂t := α ∂²T (x, t)/∂x² .                                   (9.76)

Assuming a very fast transfer, whereby very fast is to be taken relative to the
dynamics of the attached systems, one may assume that the adjustment of
the temperature profile in the distributed transfer system towards its equilib-
rium state occurs instantaneously. The left-hand side is then zero and the
profile is readily computed as linear between the temperatures of the bound-
aries, which are the temperatures of the two guarding systems. The state of
the simplified system is thus eliminated from the dynamics and may be recon-
structed from the states of the two attached systems.
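This can be illustrated with a small sketch (grid size and boundary temperatures are our own choices): iterating the discretised steady-state equation 0 = T(i−1) − 2 T(i) + T(i+1) drives the interior points onto the linear profile between the two boundary temperatures.

```python
# Jacobi-iteration sketch: the steady state of the discretised heat equation
# (left-hand side of Eq. (9.76) set to zero) is a linear profile between the
# boundary temperatures. All numbers are hypothetical.
T_left, T_right = 100.0, 20.0
n = 11                                   # grid points including boundaries
T = [T_left] + [0.0] * (n - 2) + [T_right]

for _ in range(5000):
    # interior update: T_i <- (T_{i-1} + T_{i+1}) / 2; boundaries held fixed
    T = [T[0]] + [(T[i - 1] + T[i + 1]) / 2 for i in range(1, n - 1)] + [T[-1]]

exact = [T_left + (T_right - T_left) * i / (n - 1) for i in range(n)]
max_err = max(abs(a - b) for a, b in zip(T, exact))
```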

9.1.6 The Game of Side Streams

In deriving the transfer laws, we implemented steady-state assumptions for the
(distributed) transfer system. Not all transfer systems communicate extensive
quantities as single-input-single-output systems. In some cases there are also
side streams to be considered. For our illustration we take a heat transfer sys-
tem where two heat sources are connected through a flat slab, which conducts
heat to the area above its surface, the room, and which is insulated at the
bottom.

Figure 9.9: A slab of some heat-conductive material, with cross-section Ayz ,
top surface Axy , width w and length l, being heated on one side and cooled
on the other, losing heat through the top only. The sides and the bottom are
thus ideally insulated.

We may describe this process with the PDE:

∂T (x)/∂t = (k/(ρ cp )) ∂²T (x)/∂x² − (ay w/(ρ cp Ayz )) (T (x) − TE ) .   (9.77)

To eliminate the capacity effects of the slab, we assume steady-state conditions
for the slab and re-arrange the constants slightly:

0 := ∂²T (x)/∂x² − ξ (T (x) − TE ) ,                                 (9.78)

where

ξ := ay w / (k Ayz ) .                                               (9.79)

Integrating twice and evaluating the integration constants using the boundary
conditions, namely that the temperature on the left is the temperature of the
system on the left and similarly on the other side, the slab being of length l:

T (0) := TL ,                                                        (9.80)
T (l) := TR .                                                        (9.81)

Figure 9.10: Abstract representation of the slab heated on one side (L, via
q̂L|S ) and cooled on the other (R, via q̂S|R ), with a distributed heat loss
through the top to the environment.

The solution, with T (x) measured as the deviation from TE , is then

T (x) := ( −cosh(√ξ l) sinh(√ξ x)/sinh(√ξ l) + cosh(√ξ x) ) ∆TL
         + ( sinh(√ξ x)/sinh(√ξ l) ) ∆TR .                           (9.82)

In the next step, the heat loss is computed by integrating the heat loss over
the top surface, which is of width ay , that is, one assumes the other side is
insulated:

q̂S|E := ∫₀ˡ q̂S|E (x) dx ,                                            (9.83)
      := ∫₀ˡ (−w ay ) (TE − T (x)) dx ,                               (9.84)
      := ( (e^{√ξ l} − 1) ay w / (√ξ (e^{√ξ l} + 1)) ) ∆TL
         + ( (e^{√ξ l} − 1) ay w / (√ξ (e^{√ξ l} + 1)) ) ∆TR ,        (9.85)

defining the temperature differences

∆TL := TL − TE ,                                                     (9.86)
∆TR := TR − TE .                                                     (9.87)

One finds through differentiation the heat flow at both ends:

q̂L|S := −k Ayz ∂T (x)/∂x |_L ,                                       (9.88)
q̂L|S := ( k Ayz √ξ cosh(√ξ l)/sinh(√ξ l) ) ∆TL
        − ( k Ayz √ξ /sinh(√ξ l) ) ∆TR ,                             (9.89)
q̂S|R := −k Ayz ∂T (x)/∂x |_R ,                                       (9.90)
q̂S|R := ( k Ayz √ξ /sinh(√ξ l) ) ∆TL
        − ( k Ayz √ξ cosh(√ξ l)/sinh(√ξ l) ) ∆TR .                   (9.91)

Figure 9.11: Assuming a zero capacity effect of the slab yields a
kind of heat splitter: q̂L|S in, q̂S|R and q̂S|E out.

The three heat streams, namely q̂S|E , q̂L|S and q̂S|R , balance to zero, as can
be shown with a little tedious calculation. All three streams can be represented
as functions of the two temperature differences, and thus also depend on all
three temperatures. This result says that the paradigm of connections only
being defined between two elementary systems cannot be retained if one insists
on eliminating systems with side streams by means of a steady-state or negli-
gible-capacity assumption. Two consequences can be drawn from this result:

1. Either the reduction to a zero capacity is not allowed for systems with side
   streams, or
2. a skeleton of the transfer system must be retained in the topology, in which
   the streams meet and sum to zero. Any reaction only adds a term but does
   not affect the result structurally.
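The balance of the three streams can also be checked numerically. A sketch with made-up parameter values: with the sign conventions of the figure, the inflow q̂L|S must equal the sum of the two outflows q̂S|R and q̂S|E.

```python
import math

# Numerical sketch (hypothetical parameter values) of the stream balance
# for Eqs. (9.85), (9.89) and (9.91): q_L|S - q_S|R - q_S|E = 0.
k, A_yz, a_y, w, l = 2.0, 0.5, 0.1, 8.0, 1.2
dTL, dTR = 30.0, -10.0

xi = a_y * w / (k * A_yz)        # Eq. (9.79)
s = math.sqrt(xi)

q_LS = k * A_yz * s * (math.cosh(s * l) * dTL - dTR) / math.sinh(s * l)
q_SR = k * A_yz * s * (dTL - math.cosh(s * l) * dTR) / math.sinh(s * l)
q_SE = (math.exp(s * l) - 1) / (s * (math.exp(s * l) + 1)) * a_y * w * (dTL + dTR)

residual = q_LS - q_SR - q_SE
```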

9.1.7 Marinading a Steak

Diffusion processes are quite common in nature. Actually, they are present al-
most always when solids meet gases or liquids, or gases meet liquids. Marinading
a steak is thus just a practical example of a large class of processes.

We shall look at a very simple case in which we assume that the marinade
consists essentially of water and salt, with the salt diffusing into the meat,
which in turn is assumed to be essentially stationary water. The geometry of
the problem is simplified in that the steak is assumed to have only two active
surfaces, namely the two big ones, whilst we assume that the sides are sealed,
with for example fat. In a first case we place the steak flat on the floor of a
pan, topping it up with marinade.

9.1.7.1 Step 0: Abstraction

The process is sketched quickly in Figure 9.12.

Figure 9.12: The steak lying flat in a pan, covered by the marinade.

However, depending on the time scale we choose, the abstraction could look
quite different.

In the first case (Figure 9.13) we look at a relatively short time scale, assuming
that the marinade is not changing over time; thus the exchange with the steak
is negligible. A diffusion film is assumed to form on the surface of the steak,
whilst the diffusion does not really penetrate the steak significantly.

In the second case (Figure 9.14) a larger time scale is considered, where the
marinade composition is still not changing, but the fluid film is considered
unimportant compared to the mixing in the marinade. The steak is modelled
as a one-dimensional diffusion medium.

In the third case (Figure 9.15) the marinade concentration is considered to
change.

In the fourth case (Figure 9.16) one assumes that the marinade is not moving
at all but behaves like a one-dimensional diffusion medium.

Below we shall discuss cases 3 and 4.

Figure 9.13: Topology assuming the marinade to be well mixed and not
changing, with film dynamics and steak dynamics to some depth.

Figure 9.14: Topology assuming the marinade to be well mixed and not
changing, no film dynamics, and steak dynamics to the complete depth.

Figure 9.15: Topology assuming the marinade to be well mixed but
changing, no film dynamics, and steak dynamics to the complete depth.

Figure 9.16: Topology assuming the marinade to be not mixed and
distributed like the steak.

9.1.7.2 Step 1: Behaviour

We assume the steak to be of uniform thickness dS and the marinade to be of
depth dM . Further, we introduce a co-ordinate system whereby the problem
is considered one-dimensional. The co-ordinate is labelled r.

9.1.7.2.1 Case 3

Labelling the marinade with subscript M and the steak with S, the behaviour
for case 3 is given by:

ṅM = −n̂M|S ,                                                        (9.92)
∂cS /∂t = KS ∂²µS /∂r² ,                                             (9.93)
eq BC     µM (−ǫ) = µS (+ǫ) ,                                        (9.94)
flow BC   n̂M|S (−ǫ) = n̂M|S (+ǫ) ,                                    (9.95)
flow BC   n̂M|S (dS ) = 0 .                                           (9.96)

The boundary conditions act as coupling equations. At the interface to the
marinade, the boundary conditions reflect continuity in the chemical potential
and the mass flow, whilst on the other side of the steak the boundary condition
merely says that the flow is zero.

9.1.7.2.2 Case 4

For case 4, the well-mixed assumption for the marinade is replaced by a no-
mixing, purely diffusional assumption:

∂cM /∂t = KM ∂²µM /∂r² ,                                             (9.97)
∂cS /∂t = KS ∂²µS /∂r² ,                                             (9.98)
eq BC     µM (−ǫ) = µS (+ǫ) ,                                        (9.99)
flow BC   n̂M|S (−ǫ) = n̂M|S (+ǫ) ,                                    (9.100)
flow BC   n̂M|S (−dM ) = 0 ,                                          (9.101)
flow BC   n̂M|S (dS ) = 0 .                                           (9.102)

In both cases the initial conditions must be supplemented.

9.1.7.3 Step 2a: Transport

The transport equation is simply the gradient law in both media:


∂µ
n̂M|S := −K A . (9.103)
∂r
With the transport properties K and the boundary surfa e A being given.
188 CHAPTER 9. EXAMPLES, EXERCISES, ANSWERS

9.1.7.4 Step 3: Variable Transformation

The transport introdu es the hemi al potential. Thas what we require is a


mapping of the onserved state variables to the hemi al potential. Using the
model:

µ := µo + R T ln x . (9.104)

This introduces the mole fractions, which need to be the result of mapping the component mass:

x := n−1 n . (9.105)

And

n := eT n . (9.106)

With eT := [1, 1, . . . , 1].


The distributed models require the concentration, so we add:

c := V −1 n , (9.107)
V := ρ⁻¹ n . (9.108)

Assuming the density ρ to be constant and known completes the transformation definitions.
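As a cross-check, the transformation chain of Equations (9.104)-(9.108) can be sketched numerically; the numbers below (standard potentials, temperature, molar density) are illustrative assumptions, not data from the example:

```python
import numpy as np

# Illustrative values only.
R   = 8.314                      # gas constant, J/(mol K)
T   = 298.15                     # constant temperature (Step 4), K
mu0 = np.array([0.0, -5.0e3])    # hypothetical standard chemical potentials, J/mol
rho = 55.0e3                     # assumed constant molar density, mol/m^3

def transformations(n_vec):
    """Map component mass n -> (x, c, mu), Eqs (9.104)-(9.108)."""
    e = np.ones_like(n_vec)
    n_tot = e @ n_vec                 # n := e^T n, Eq (9.106)
    x = n_vec / n_tot                 # x := n^-1 n, Eq (9.105)
    V = n_tot / rho                   # V := rho^-1 n, Eq (9.108)
    c = n_vec / V                     # c := V^-1 n, Eq (9.107)
    mu = mu0 + R * T * np.log(x)      # mu := mu0 + R T ln x, Eq (9.104)
    return x, c, mu

x, c, mu = transformations(np.array([1.0, 3.0]))
```

The mole fractions sum to one and the concentrations are simply x scaled by the molar density, since V was defined through the total moles.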

9.1.7.5 Step 4: Conditions

There is no reaction taking place and the temperature is assumed to be constant. With the chemical potentials at normal conditions being given, the set of transformations is complete.

9.1.7.6 Step 6: Manipulations

The partial differential equations are discretised in the spatial co-ordinate using a 3-point approximation, with the index k labelling the points on the regular grid of width ∆r (Section 3.1). The discretisation for case 4 introduces the indexing scheme 0, 1, 2, . . . , n, n+1, . . . , n+m, with point 0 representing the outer surface of the marinade (no-flow condition), n representing the boundary to the steak, and n+m the outer surface of the steak (no-flow condition). Obviously, for case 3 this simplifies by having n = 0.
For the internal points, we thus write:


∂²µ/∂r² |k := (µk−1 − 2 µk + µk+1)/(∆r)² . (9.109)

At the extreme points (−dM , dS ), the equations read:

∂²µ/∂r² |0 := (µ0 − 2 µ1 + µ2)/(∆r)² , (9.110)
∂²µ/∂r² |n+m := (µn+m−2 − 2 µn+m−1 + µn+m)/(∆r)² , (9.111)

which is supplemented with the no-flow condition:

n̂M|S (−dM ) := (µ1 − µ0)/∆r := 0 , (9.112)
n̂M|S (dS ) := (µn+m − µn+m−1)/∆r := 0 . (9.113)

Alternatively, one can model the extreme boundary points slightly differently by introducing the no-flow condition indirectly. One introduces an imaginary point outside the boundary and imposes the flow condition by assuming symmetry at the boundary, thereby implying a zero flow at the boundary:


∂²µ/∂r² |0 := (µ−1 − 2 µ0 + µ1)/(∆r)² , (9.114)
∂²µ/∂r² |n+m := (µn+m−1 − 2 µn+m + µn+m+1)/(∆r)² . (9.115)

With the symmetry µ−1 = µ1 and µn+m−1 = µn+m+1 the two expressions
simplify to:


∂²µ/∂r² |0 := (−2 µ0 + 2 µ1)/(∆r)² , (9.116)
∂²µ/∂r² |n+m := (2 µn+m−1 − 2 µn+m)/(∆r)² . (9.117)
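A minimal sketch of the resulting discrete diffusion operator with the symmetric (ghost-point) no-flow rows of Equations (9.116) and (9.117); the grid size and transport property below are arbitrary:

```python
import numpy as np

def laplacian_no_flux(npts, dr, K):
    """3-point second-difference matrix with symmetric (ghost-point)
    no-flow rows at both ends, Eqs (9.109), (9.116), (9.117)."""
    M = K / dr**2
    L = np.zeros((npts, npts))
    for k in range(1, npts - 1):              # interior stencil, Eq (9.109)
        L[k, k-1:k+2] = M * np.array([1.0, -2.0, 1.0])
    L[0, 0], L[0, 1] = -2.0 * M, 2.0 * M      # Eq (9.116)
    L[-1, -2], L[-1, -1] = 2.0 * M, -2.0 * M  # Eq (9.117)
    return L

L = laplacian_no_flux(6, 0.1, 1e-3)
row_sums = L.sum(axis=1)   # zero row sums: a uniform potential is stationary
```

The zero row sums reflect that the closed (no-flow) system conserves the transported quantity: a uniform chemical potential produces no change.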

9.1.7.6.1 Case 3

Observing that n := 0 for case 3, we have the ordinary differential equation describing the behaviour of the tank:

ṅ0 := −KS A (µ1 − µ0)/∆r . (9.118)

And the matrix equation for the steak:

\[
\begin{bmatrix} \dot c_1 \\ \dot c_2 \\ \vdots \\ \dot c_{m-1} \\ \dot c_m \end{bmatrix}
:=
\begin{bmatrix}
-2M & M & & & \\
M & -2M & M & & \\
 & \ddots & \ddots & \ddots & \\
 & & M & -2M & M \\
 & & & 2M & -2M
\end{bmatrix}
\begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_{m-1} \\ \mu_m \end{bmatrix}
+
\begin{bmatrix} M\,\mu_0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix} ,
\qquad (9.119)
\]

with M := (∆r)⁻² KS .
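The case-3 right-hand side of Equation (9.119) can be assembled directly; a uniform chemical potential must then be stationary, which serves as a quick consistency check (all numbers are illustrative):

```python
import numpy as np

def case3_rhs(mu0, mu, dr, KS):
    """Right-hand side of Eq (9.119): tridiagonal interior stencil with
    the marinade potential mu0 forcing the first steak point."""
    M = KS / dr**2
    m = mu.size
    T = np.zeros((m, m))
    for k in range(1, m - 1):
        T[k, k-1:k+2] = M * np.array([1.0, -2.0, 1.0])
    T[0, 0], T[0, 1] = -2.0 * M, M            # first row of Eq (9.119)
    T[-1, -2], T[-1, -1] = 2.0 * M, -2.0 * M  # no-flow row, Eq (9.117)
    f = np.zeros(m)
    f[0] = M * mu0                            # forcing from the marinade
    return T @ mu + f

rhs_uniform = case3_rhs(1.0, np.ones(5), dr=0.01, KS=1e-4)   # equilibrium
rhs_driven = case3_rhs(1.0, np.zeros(5), dr=0.01, KS=1e-4)   # fresh steak
```

At equilibrium the right-hand side vanishes; with an unmarinated steak the first point is driven by the marinade potential.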

9.1.7.6.2 Case 4

For case 4, we first have to sort out the boundary between the two phases by computing the missing µn from the boundary condition:

−(∆rM )−1 KM (µn − µn−1 ) := −(∆rS )−1 KS (µn+1 − µn ) . (9.120)

With R := (∆rM/∆rS) KM⁻¹ KS, multiplying Equation (9.120) through by ∆rM KM⁻¹ gives:

µn := (R + I)⁻¹ (µn−1 + R µn+1) . (9.121)

This is then substituted into the expressions for the approximations at the two points left and right of the boundary:


∂²µ/∂r² |n−1 := (µn−2 − 2 µn−1 + µn)/∆r² (9.122)
:= (µn−2 − 2 µn−1 + (R + I)⁻¹ (µn−1 + R µn+1))/∆r² (9.123)
:= (µn−2 + ((R + I)⁻¹ − 2 I) µn−1 + (R + I)⁻¹ R µn+1)/∆r² , (9.124)

and


∂²µ/∂r² |n+1 := (µn − 2 µn+1 + µn+2)/∆r² (9.125)
:= ((R + I)⁻¹ (µn−1 + R µn+1) − 2 µn+1 + µn+2)/∆r² (9.126)
:= ((R + I)⁻¹ µn−1 + ((R + I)⁻¹ R − 2 I) µn+1 + µn+2)/∆r² . (9.127)

The equations can be collected into a matrix representation:

ċ := L µ , (9.128)
with

\[
\dot{\bar c} := \begin{bmatrix} \dot c_0 \\ \dot c_1 \\ \vdots \\ \dot c_{n-1} \\ \dot c_{n+1} \\ \vdots \\ \dot c_{n+m} \end{bmatrix}
\quad\text{and}\quad
\mu := \begin{bmatrix} \mu_0 \\ \mu_1 \\ \vdots \\ \mu_{n-1} \\ \mu_{n+1} \\ \vdots \\ \mu_{n+m} \end{bmatrix} ,
\]

and

\[
L := \begin{bmatrix}
-2M & 2M & & & & & & \\
M & -2M & M & & & & & \\
 & \ddots & \ddots & \ddots & & & & \\
 & & M & -2M & M & & & \\
 & & & M & M\,Q_1 & M\,Q_2 & & \\
 & & & & S\,Q_3 & S\,Q_4 & S & \\
 & & & & & S & -2S & S \\
 & & & & & & \ddots & \ddots \\
 & & & & & S & -2S & S \\
 & & & & & & 2S & -2S
\end{bmatrix} ,
\]

where, consistent with Equations (9.124) and (9.127),

M := (∆rM)⁻² KM ,
S := (∆rS)⁻² KS ,
Q1 := (R + I)⁻¹ − 2 I ,
Q2 := (R + I)⁻¹ R ,
Q3 := (R + I)⁻¹ ,
Q4 := (R + I)⁻¹ R − 2 I .
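For a single component the blocks become scalars, and the interface construction can be checked numerically: the eliminated µn of Equation (9.121) makes the two fluxes of Equation (9.120) match, and the interface rows of L sum to zero, so a uniform potential is stationary. The numbers, and the scalar ratio R := (∆rM/∆rS) KS/KM implied by Equation (9.120), are illustrative assumptions:

```python
import numpy as np

# Scalar (single-component) sketch of the interface blocks; values illustrative.
KM, KS = 2.0e-4, 0.5e-4
drM, drS = 0.02, 0.01
M, S = KM / drM**2, KS / drS**2
R = (drM / drS) * (KS / KM)      # scalar form of the ratio R
Q1 = 1.0 / (R + 1.0) - 2.0
Q2 = R / (R + 1.0)
Q3 = 1.0 / (R + 1.0)
Q4 = R / (R + 1.0) - 2.0

def mu_interface(mu_nm1, mu_np1):
    """Eliminated boundary potential, scalar form of Eq (9.121)."""
    return (mu_nm1 + R * mu_np1) / (R + 1.0)

# flux continuity across the interface, Eq (9.120)
mu_nm1, mu_np1 = 0.3, 0.9
mun = mu_interface(mu_nm1, mu_np1)
flux_M = KM / drM * (mun - mu_nm1)
flux_S = KS / drS * (mu_np1 - mun)

# interface stencil rows sum to zero: a uniform potential is stationary
row_M = M * (1.0 + Q1 + Q2)
row_S = S * (Q3 + Q4 + 1.0)
```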

Figure 9.17 and Figure 9.18 show some results from simulations.

Figure 9.17: Mesh plot for the marinating-steak problem (concentration versus position, from marinade to steak)

Figure 9.18: Mesh plot for the marinating-steak problem (concentration versus position and time)



9.1.8 2D-Heat Dissipation in a Fin


Section 9.1.6 introduced a heat diffusion and heat dissipation process. Here we model the same process, but this time in two dimensions; that is, the temperature in the slab is not uniform in the vertical direction but varies. The 2D description still assumes uniform temperature in the y-direction and no edge effects.

∂T(z, x)/∂t = k/(ρ cp) (∂²T(z, x)/∂x² + ∂²T(z, x)/∂z²) − (ay w)/(ρ cp Ayz) (T(0, x) − TR) . (9.129)

Discretisation is done in the two co-ordinates z and x. For simplicity we use the same discretisation interval in both directions, call it h. Using the index j for the x co-ordinate and the index i for the z co-ordinate, the grid is as shown in Figure 9.19. The empty dots represent the internal points, which include the two exposed boundaries: the bottom being insulated, the top exposed to the air in the room. The two boundaries left and right drive the process and are assumed to be given as conditions. Note the order of the spatial co-ordinates. Based on the grid one can propose different finite difference approximations. Here we use the most common, but also the simplest, 3-point finite difference approximation.

Figure 9.19: Grid arrangement for the 2-D case study of the cooling fin (columns j := 0, . . . , n+1 in x; rows i := 0, . . . , m+1 in z)

The discrete model, using the notation T(z-index, x-index),

then takes the form:

Ṫi,j := α (Ti−1,j + Ti+1,j − 4 Ti,j + Ti,j−1 + Ti,j+1 ) (9.130)


for i := 1, . . . , m
for j := 1, . . . , n

For the top row of the grid:

Ṫ0,j := α (−4 T0,j + 2 T1,j + T0,j−1 + T0,j+1 ) − β (T0,j − TR ) (9.131)


for j := 1, . . . , n

For the bottom row of the grid:

Ṫm+1,j := α (−4 Tm+1,j + 2 Tm,j + Tm+1,j−1 + Tm+1,j+1 ) (9.132)


for j := 1, . . . , n

where:

α := k/(ρ cp) h⁻² , (9.133)
β := (ay w)/(ρ cp Ayz) h . (9.134)
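One explicit Euler step of Equations (9.130)-(9.132) can be sketched as follows; the grid, time step, and coefficients are illustrative, and a uniform field at room temperature with matching boundaries must remain unchanged:

```python
import numpy as np

alpha, beta, dt = 0.1, 0.05, 0.01   # illustrative values

def step(T, T_left, T_right, TR):
    """T holds rows i = 0..m+1 (top..bottom) of the internal columns j = 1..n;
    T_left / T_right are the driving boundary columns j = 0 and j = n+1."""
    Tp = np.concatenate([T_left[:, None], T, T_right[:, None]], axis=1)
    Tn = T.copy()
    m2, n = T.shape
    for i in range(m2):
        for j in range(1, n + 1):
            if i == 0:                     # top row, Eq (9.131): exposed
                lap = -4*Tp[0, j] + 2*Tp[1, j] + Tp[0, j-1] + Tp[0, j+1]
                Tn[i, j-1] = T[i, j-1] + dt*(alpha*lap - beta*(Tp[0, j] - TR))
            elif i == m2 - 1:              # bottom row, Eq (9.132): insulated
                lap = -4*Tp[i, j] + 2*Tp[i-1, j] + Tp[i, j-1] + Tp[i, j+1]
                Tn[i, j-1] = T[i, j-1] + dt*alpha*lap
            else:                          # interior, Eq (9.130)
                lap = (Tp[i-1, j] + Tp[i+1, j] - 4*Tp[i, j]
                       + Tp[i, j-1] + Tp[i, j+1])
                Tn[i, j-1] = T[i, j-1] + dt*alpha*lap
    return Tn

T0 = np.full((5, 3), 75.0)
T1 = step(T0, T_left=np.full(5, 75.0), T_right=np.full(5, 75.0), TR=75.0)
```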

The next step is to cast these equations into a matrix state-space form. The state is the temperature of the internal node points, and the input comprises the points on the left and the right boundary and the room temperature.

x := [Ti,j ]^T , j := 1, . . . , n, i := 1, . . . , m , (9.136)
ul := [Ti,0 ] , i := 0, . . . , m+1 , (9.137)
ur := [Ti,n+1 ] , i := 0, . . . , m+1 , (9.138)
u := [ul^T , ur^T , TR ]^T . (9.139)

The dynamic state-space representation is constructed in steps:

ẋ := (α As + β Ab ) x + (α Bs + β Bb ) u , (9.140)
ẋ := A x + B u . (9.141)

All of the matrices are sparse and reflect the regular pattern of the grid and the weights of the finite difference approximation. The matrices Ab and Bb are

Figure 9.20: The four matrices As, Ab, Bs, Bb for the fin's 2-D discrete representation, shown for n := 3 and m := 4: As carries the 5-point stencil weights (−4 on the diagonal, +1 for the neighbours, +2 towards the insulated and the exposed rows), Ab a unit diagonal on the exposed top row, and Bs, Bb the couplings to the boundary points 00, . . . , 40, 05, . . . , 45 and the room temperature R.

each split into two parts:

\[
A_b := \begin{bmatrix} \mathrm{diag}[e] & 0 \\ 0 & 0 \end{bmatrix} , \qquad (9.143)
\]
\[
B_b := \begin{bmatrix} 0 & e \\ 0 & 0 \end{bmatrix} , \qquad (9.144)
\]

e := [1, . . . , 1]^T ∈ R^n . (9.145)

Figure 9.20 shows the four matrices As, Ab, Bs, Bb for the case where n := 3 and m := 4. The process is driven by the temperatures at the left and the right boundaries, being the grid points i := 0, . . . , 4, j := 0 and i := 0, . . . , 4, j := 5, and the room temperature labelled R.

Figure 9.21: 2-D fin dynamics at times 0, 10, 20, 40 and 80: left cold (50), right hot (100), front room (75 | 0)

9.1.9 Dynamic Flash


The flash is a liquid and a gas phase sharing a common containment. Such processes appear wherever a liquid phase is in contact with a gas phase. One of the main applications is in separation, namely distillation, which is nothing else than a vertical pile of flashes.

Figure 9.22: A simple flash, ideally insulated

The objective is to model a dynamic flash and then gradually apply simplifying assumptions.

9.1.9.1 A First Abstraction

It seems reasonable to assume the two phases to be well mixed in the bulk, only exhibiting changes in the intensities in the neighbourhood of the boundary, if one assumes the process to be ideally insulated. For the local changes one may assume a film theory model with no capacity effects, thus a Nernst approximation for the film.

Figure 9.23: A possible abstraction with a simple structure: bulk phases L and G coupled through films to the boundary B, exchanging conductive heat flow, mass flow n̂, and volume work (moving boundary)



9.1.9.2 The Base Model

9.1.9.2.1 Balances

ṅL := −n̂L|B , (9.146)
ṅB := n̂L|B − n̂B|G , (9.147)
ṅG := n̂B|G , (9.148)
U̇L := −(ÛL|B + pL|B V̂L|B ) − q̂L|B − ŵL|B , (9.149)
U̇L := −ĤL|B − q̂L|B − ŵL|B , (9.150)
U̇B := ĤL|B + q̂L|B + ŵL|B − ĤB|G − q̂B|G − ŵB|G , (9.151)
U̇G := ĤB|G + q̂B|G + ŵB|G . (9.152)

9.1.9.2.2 Transport

n̂a|b := −K^n_a|b (µb − µa ) , a|b ∈ {L|B, B|G} , (9.153)
Ĥa|b := −h^T_a|b n̂a|b , (9.154)
q̂a|b := −k^q_a|b (Tb − Ta ) , (9.155)
ŵa|b := pa V̇a . (9.156)

9.1.9.2.3 Transformations

We assume that for the liquid phase and for the gas phase we have a model in the form of an energy function. The model is given as a Helmholtz surface:
As := As (Ts , Vs , ns ) , (9.157)

and for the liquid phase we also provide the density as a function of the temperature:

ρL := ρ(TL ) , (9.158)

and

VL := eT nL /ρL . (9.159)

A Legendre transformation links the internal energy to the Helmholtz energy:

Us := As − Ts ∂As/∂Ts , (9.160)

which gives also:

ps := −∂As/∂Vs , (9.161)

and

µs := ∂As/∂ns . (9.162)

The transformations must link the secondary state variables being used for the transport equations to the primary state, namely the component mass and the internal energy of the two capacities L and G. The sequence then is:

• VL from Equation (9.159)(L)

• TL from Equation (9.157)(L) and Equation (9.160)(L)


• µL from Equation (9.162)(L)
• VG := V − VL
• TG from Equation (9.157)(G) and Equation (9.160)(G)
• pG from Equation (9.161)(G)
• µG from Equation (9.162)(G)

In addition we observe that:

• The total volume is constant, thus V̇L = −V̇G .
• The mechanical equilibrium between the two phases induces pL = pG .
• Thus ŵL|B = ŵB|G .

9.1.9.3 Manipulations

9.1.9.3.1 Boundary

The boundary has no capacity, thus the accumulation terms in the respective balance equations are zero. Consequently the boundary has no state, and thus no intensive properties can be derived from state transformations. The intensive properties at the boundaries are the local ones at the limit of the respective phase. They must be extracted from the stationary balances, which imply:

n̂L|B = n̂B|G =: n̂L|G , (9.163)

and the energy balance simplifies to:

0 := ĤL|B + q̂L|B − ĤB|G − q̂B|G . (9.164)

From these two equations the chemical potential at the boundary and the temperature are extracted. For the chemical potential the solution is found quickly after substituting the transfer laws:

µB := (K^n_L|B + K^n_B|G )⁻¹ (K^n_L|B µL + K^n_B|G µG ) . (9.165)

Solving the energy balance for the temperature cannot readily be done analytically if the partial molar enthalpies are a nonlinear function of the temperature. If the constant-pressure specific heat capacities can be assumed constant and all involved transport coefficients are constant, then the problem is linear in the boundary temperature and can thus be readily solved.
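For scalar transfer coefficients, the boundary potential of Equation (9.165) is a weighted mean of the bulk potentials and makes the two fluxes match; the coefficients and potentials below are illustrative:

```python
import numpy as np

# Stationary boundary: equal fluxes on both sides fix mu_B, Eq (9.165).
K_LB, K_BG = 3.0, 1.0        # illustrative transfer coefficients
mu_L, mu_G = -2.0, -5.0      # illustrative bulk chemical potentials

mu_B = (K_LB * mu_L + K_BG * mu_G) / (K_LB + K_BG)   # Eq (9.165), scalar

flux_LB = -K_LB * (mu_B - mu_L)   # transfer law on the liquid side
flux_BG = -K_BG * (mu_G - mu_B)   # transfer law on the gas side
```

The boundary potential necessarily lies between the two bulk potentials, weighted towards the side with the larger transfer coefficient.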

9.1.9.3.2 Assumption: Fast heat transfer in liquid

9.1.9.3.3 Assumption: Fast overall heat transfer

9.1.9.3.4 Assumption: Fast diusion in liquid

9.1.9.3.5 Assumption: Fast overall diusion

9.1.9.3.6 Assumption: negligible capacity for the gas phase



9.1.10 Multi-loop mixing and singular perturbation


9.1.10.1 Abstract Process

The process describes, for example, a mixing process. It has two internal cycles. An inflow drives the system. A constant volume assumption is made for each individual part, thus also for the overall process.

Figure 9.24: A seemingly complex mixing process with two internal cycles (a high-flow cycle through f, a low-flow cycle through e). The boxed volumetric flows are given, and so is the composition of the inflow to the plant. All flows are assumed to be unidirectional; thus flow direction does not change during the process' operation.

The overall node set is split into internal and external nodes:
S := {a, b, c, d, e, f, g, h, i, k, l} (9.166)
Si := {a, b, c, d, e, f, g, h, i} (9.167)
Se := {k, l} (9.168)
Similarly the stream set splits into an internal and an external stream set:
M := {a|b, b|c, c|d, d|e, e|g, c|f, f |g, g|h, h|i, i|a, k|a, d|l} (9.169)
Mi := {a|b, b|c, c|d, d|e, e|g, c|f, f |g, g|h, h|i, i|a} (9.170)
Me := {k|a, d|l} (9.171)

9.1.10.2 Model

9.1.10.2.1 The balances

The component mass balances are established quickly¹:

ṅ̄ := F̄ n̂̄ (9.172)

¹ For clarity reasons, all block objects, vectors and matrices, are marked with a bar.
204 CHAPTER 9. EXAMPLES, EXERCISES, ANSWERS

All vectors and matrices are block vectors and block matrices. In this case, the flow matrix is the Kronecker product

F̄ := F ⊗ I (9.173)

of the incidence matrix of the graph, F, and an identity matrix of appropriate dimension.
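A small numerical sketch of the blocking operation and of the mixed-product rule used in the manipulations below (sizes are arbitrary):

```python
import numpy as np

# Blocking a 2-node, 1-stream incidence matrix for a 3-component mixture.
F = np.array([[-1.0],
              [ 1.0]])        # tail node a, head node b
I3 = np.eye(3)
Fbar = np.kron(F, I3)         # Eq (9.173): block incidence matrix

# mixed-product rule used later: (A (x) B)(C (x) D) = (A C) (x) (B D)
rng = np.random.default_rng(0)
A, B = rng.random((2, 2)), rng.random((3, 3))
C, D = rng.random((2, 2)), rng.random((3, 3))
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
```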
The incidence matrix of the overall system, thus including the environment systems, is:

a|b b|c c|d d|e e|g c|f f |g g|h h|i i|a k|a d|l

a −1 +1 +1

b +1 −1

c +1 −1 −1

d +1 −1

e +1 −1 −1
Fo :=
f +1 −1

g +1 +1 −1

h +1 −1

i +1 −1

k −1

l +1
(9.174)

9.1.10.2.2 The constant volume assumption

The constant volume assumption requires the definition of the volume as a function of the state, namely the mass. The total mass for the compartment s is:

ms := e^T ns (9.175)
:= ρ Vs . (9.176)

Differentiation and the assumption of constant density

ρ := constant (9.177)

give

ṁs := ρ V̇s . (9.178)

The mass balance then is

(I ⊗ e^T) ṅ̄ := (I ⊗ e^T) F̄ n̂̄ , (9.179)
ṁ := (I ⊗ e^T)(F ⊗ I) n̂̄ , (9.180)
:= (I F ⊗ e^T I) n̂̄ , (9.181)
:= (F ⊗ e^T) n̂̄ , (9.182)
:= F m̂ . (9.183)

The last step is found simply by inspection². Thus for the arbitrary compartment s one gets

ṁs := Fs m̂ , (9.184)
ρ⁻¹ ṁs := Fs ρ⁻¹ m̂ , (9.185)
V̇s := Fs V̂ . (9.186)

The constant volume assumption

Vs := constant (9.187)

leads for a compartment to

0 := Fs V̂ , (9.188)

and for the stack of compartments

0 := F V̂ . (9.189)

For the calculation of the unknown flow rates, we define two selection matrices: the first, Sk, selects the known volumetric flows, whilst Sn selects the unknown ones. The two matrices are used to split the constant volume equation into two parts:

0 := F Sk^T Sk V̂ + F Sn^T Sn V̂ . (9.190)

Defining

Bv := F Sk^T , (9.191)
Av := F Sn^T , (9.192)
V̂k := Sk V̂ , (9.193)
V̂n := Sn V̂ , (9.194)

makes it easy to solve for the unknown flows:

V̂n := −Av⁻¹ Bv V̂k . (9.195)

Obviously this requires that the matrix Av is invertible.
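The flow-splitting of Equations (9.190)-(9.195) can be sketched on a trivial chain k → a → b → l with one known feed flow; the numbers are illustrative:

```python
import numpy as np

# Streams: k|a, a|b, b|l; internal nodes a and b; the feed V_k|a is known.
F = np.array([[ 1.0, -1.0,  0.0],    # node a
              [ 0.0,  1.0, -1.0]])   # node b
Sk = np.array([[1.0, 0.0, 0.0]])     # selects the known flow, Eq (9.193)
Sn = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])     # selects the unknown flows, Eq (9.194)

Bv = F @ Sk.T                        # Eq (9.191)
Av = F @ Sn.T                        # Eq (9.192)
Vk = np.array([2.5])                 # given feed flow
Vn = -np.linalg.solve(Av, Bv @ Vk)   # Eq (9.195)

# reassemble the full flow vector and check the constant volume condition
Vhat = Sk.T @ Vk + Sn.T @ Vn
```

As expected for a simple chain, the constant volume condition propagates the feed flow through every stream.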


² We also used the fact that (A ⊗ B)(C ⊗ D) = A C ⊗ B D.

Note that the constant volume assumption implies:

ṁs := Fs m̂ = 0 , (9.196)
e^T ns = constant . (9.197)

This in turn implies that the state for each compartment is reduced by one! The component mass vectors, and the consequently derived composition vectors, are reduced by one dimension. For a tracer-solvent system this implies that one only requires the equations for the tracer, making all states scalar.

9.1.10.2.3 The ABCD representation

Being interested in the composition history, we transform the representation into the concentration space. The transformation is based on

c := V⁻¹ n . (9.198)

The properties of the stream, the intensities, are given by the physical source of the stream, and thus depend on the physical direction of the flow. If the stream changes direction, then the properties switch too. Assuming that the flows are uni-directional, thus do not change as part of the process operation, the physical source of the flows is fixed and can be extracted from the graph, thus from the incidence matrix.
For this purpose the incidence matrix is first split into three parts, namely the subnetwork for the internal streams and nodes only, the part associated with the inflows, and the part associated with the outflows. The partition is achieved by row selection. Let

F̄ :: the incidence matrix of the graph representing the plant and its environment nodes, F̄ := [f_s,m ]∀s∈S,∀m∈M
r :: the vector of row indices associated with all internal nodes
i :: the vector of column indices associated with internal streams
f :: the vector of column indices associated with inflow streams
o :: the vector of column indices associated with outflow streams

The matrix

\[
P := \left[ p_{m,s} := \begin{cases} -f_{s,m} & \text{if } f_{s,m} = -1 \\ 0 & \text{otherwise} \end{cases} \right]_{m := i+o,\; s := r} \qquad (9.199)
\]

and the blocked version is

P̄ := P ⊗ I . (9.200)

Substitution into the component mass balances then gives:

c̄̇ := V̄⁻¹ F̄_r,i+o V̂̄_i+o P̄ c̄ + V̄⁻¹ F̄_r,f V̂̄_f c̄_f , (9.201)

which, defining

x := c̄ , (9.202)
u := c̄_f , (9.203)
y := x , (9.204)
A := V̄⁻¹ F̄_r,i+o V̂̄_i+o P̄ , (9.205)
B := V̄⁻¹ F̄_r,f V̂̄_f , (9.206)
C := I , (9.207)
D := 0 , (9.208)

gives the standard system's representation.

9.1.10.2.4 Model reduction

Assuming that the flow in the left cycle is much larger than the one in the right and than the flow through the process, the flows Mf := {a|b, b|c, c|f, f|g, g|h, h|i, i|a} are eliminated. For this purpose we split the incidence matrix for the process into two parts: one that encloses the subnet formed by the fast flows, denoted by Ff, and the remainder, denoted by Fr.

ṅ̄ := F̄_f n̂̄ + F̄_r n̂̄ (9.209)
:= (Ff ⊗ I) n̂̄ + F̄_r n̂̄ . (9.210)

Figure 9.25: The reduced mixing process: the fast-cycle nodes are lumped into z := a+b+c+f+g+h+i, leaving the chain k → z → d → l with node e and the return flows V̂d|c and V̂e|z.

To eliminate the fast flows, we multiply with a matrix Ω̄ chosen such that the product

Ω̄ F̄_f =: 0 . (9.211)

This problem can be simplified to finding

Ω Ff =: 0 , (9.212)

which is a smaller problem to solve. The reduction matrix is then

Ω̄ := Ω ⊗ I , (9.213)

and the reduced model is:

Ω̄ ṅ̄ := Ω̄ F̄_r n̂̄ . (9.214)

From the graphical representation it is easy to see that Ω simply adds those nodes that are connected by the fast flows. In our example this results in Figure 9.25.
Relabelling the graph, the same approach as discussed above is applied to obtain the ABCD representation.
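A matrix Ω satisfying Equation (9.212) is a basis of the left null space of Ff; it can be computed, for example, from the singular value decomposition. For a connected fast cycle the single left null vector is proportional to [1, 1, . . . , 1], which is exactly the node-lumping read off the graph (the 3-node cycle below is illustrative):

```python
import numpy as np

# Toy fast subnet: a 3-node cycle a -> b -> c -> a (nodes x streams).
Ff = np.array([[-1.0,  0.0,  1.0],
               [ 1.0, -1.0,  0.0],
               [ 0.0,  1.0, -1.0]])

U, s, Vt = np.linalg.svd(Ff)
rank = int(np.sum(s > 1e-10))
Omega = U[:, rank:].T            # rows span the left null space: Omega @ Ff = 0
```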

9.1.10.3 Some simulation results



Figure 9.26: Full model (left: cycle 1 flow 5; right: cycle 1 flow 50; cycle 2 flow 2, inflow 1)

Figure 9.27: Reduced model (same flow rates)

Figure 9.28: Comparison of the average of the lumped and the reduced model



9.2 Theory

9.2.1 Differential Balance (Shell Balance)

9.2.1.1 Problem Definition

Derive the differential balance for a one-dimensional distributed system and a generic conserved quantity x.

9.2.1.2 Solution

Let r denote the spatial co-ordinate. We draw up the balance for a small volume characterised by the co-ordinates r and r + ∆r, where ∆r denotes a small increment in r; with A the area at the location, the balance reads:

dx/dt := A δx̂|r − A δx̂|r+∆r . (9.215)

Expanding the change in the flux as a Taylor series, truncated after the linear term:

δx̂|r+∆r := δx̂|r + (∂δx̂/∂r) ∆r , (9.216)

one finds for the balance:

dx/dt := A δx̂|r − A (δx̂|r + (∂δx̂/∂r) ∆r) (9.217)
:= −A ∆r (∂δx̂/∂r) (9.218)
:= −∆V (∂δx̂/∂r) , (9.219)
d(x/∆V)/dt := −∂δx̂/∂r . (9.220)

Letting the volume approach zero by letting ∆r go to zero in the limit:

lim∆V→0 d(x/∆V)/dt := −∂δx̂/∂r , (9.221)

one finds:

∂δx(r, t)/∂t := −(∂δx̂/∂π) (∂π/∂r) . (9.222)

So far nothing has been said about the transfer law. Taking the simplest one, namely Equation (2.25), the right-hand side becomes:

∂δx/∂t := ∂/∂r (λ ∂π/∂r) (9.223)
:= λ ∂²π/∂r² . (9.224)

Thus we get:

∂δx(r, t)/∂t := λ ∂²π/∂r² . (9.225)

The time derivative of the conserved extensive quantity, normed with the volume, thus the respective density, is proportional to the second derivative of the driving force.

9.2.1.3 An Example: Fourier's Heat Diffusion Equation

To go on: take for the conserved quantity the enthalpy H and for the transferred quantity heat; then π := T, and consequently the transfer parameter λ is the heat conductivity of the described material. Assuming the specific heat and the density are constant, the left-hand side becomes:

ρ cp ∂T/∂t := λ ∂²T/∂r² . (9.226)

After re-arrangement this becomes:

∂T/∂t := λ/(ρ cp) ∂²T/∂r² (9.227)
:= α ∂²T/∂r² , (9.228)

where α is the heat diffusivity.
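Equation (9.228) can be integrated with the simplest explicit (forward-time, centred-space) scheme; with both ends held at the same temperature the profile must relax to that value. The grid and material numbers are illustrative, chosen so that α ∆t/∆r² < 1/2 for stability:

```python
import numpy as np

alpha, dr, dt = 1e-5, 0.01, 2.0   # dt*alpha/dr^2 = 0.2 < 0.5: stable
T = np.full(21, 20.0)             # initially cold rod
T[0], T[-1] = 100.0, 100.0        # hot, fixed boundaries

for _ in range(20000):            # march Eq (9.228) forward in time
    T[1:-1] += dt * alpha * (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dr**2

# after many diffusion times the profile approaches the uniform steady state
```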

9.2.2 Transfer Functions

To be completed.

9.2.3 Basic dynamic systems

9.2.3.1 First-Order Single-Input-Single-Output System

Single-Input-Single-Output systems, abbreviated SISO, are systems that are scalar at both ends, so to speak, which however does not imply that the state is scalar as well.
The {A, B, C, D} representation of a generic SISO LTI system is

ẋ := A x + b u , (9.229)
y := c^T x + d u . (9.230)

The single input is mapped onto the state with the vector b, thereby taking the rôle of the matrix B, and the state is mapped onto the single output by c^T, taking the rôle of the matrix C. The input acting directly on the output is amplified with the scalar d.
The transfer function of the SISO is then:

g(s) := c^T |I s − A|⁻¹ adj(I s − A) b + d . (9.231)
9.2.3.1.1 S alar-State Case

Making also the state s alar yields the stru turally simplest model one an
generate without eliminating one or the other system matri es. The most
ommon additional simpli ation is the ase where d is zero. The transfer
fun tion then onsists of s alar quantites only and reads:

g(s) := c (s − a)−1 b (9.232)


 −1
bc 1
:= − s+1 (9.233)
−a a

For stable system the a < 0, thus the time onstant − a1 and the steady-state
gain −a
bc
are positiv.
This first-order system is used in various applications as a first approximation for a dynamic behaviour. Applying it to the description of a physical process requires finding two parameters, namely the steady-state gain and the time constant. The identification experiment must excite the system sufficiently dynamically in order to see the behaviour of the plant. Probably the most common approach, though not necessarily the best one, is to inject a step and extract the two parameters from the plant's input and the plant's response, called the step response.
The fitting is most commonly done manually, meaning on a graph showing the step input and the plant's response. How one can find the two parameters from the response follows readily from an analysis of the analytical solution.
The solution in the time domain is:

y(t) := c exp{a t} x(0) + ∫₀ᵗ c exp{a θ} b u(t − θ) dθ . (9.234)

With u(t) being a step, and assuming that the plant is initially at zero state, the expression simplifies to

y(t) := (c b)/a exp{a θ}|₀ᵗ u⁰ , (9.235)
y(t) := (c b)/a (exp{a t} − 1) u⁰ . (9.236)

For stable plants, that is a < 0, the steady-state gain is thus:

k := (c b)/(−a) (9.237)
:= (c b)/|a| . (9.238)

The tangent at the start of the step response to a step of magnitude u⁰ is:

c ẋ(t := 0) := c a x(0) + c b u⁰ (9.239)
:= c b u⁰ . (9.240)

The two asymptotes (the tangent at zero and the tangent to the steady state at infinity) of the step response are thus:

v(t) := (c b)/|a| u⁰ , (9.241)
w(t) := c b u⁰ t , (9.242)

and their intersection t× :

(c b)/|a| u⁰ := c b u⁰ t× , (9.243)
t× := 1/|a| , (9.244)

which is the time constant τ. Interesting is also how far the process has come after n times the time constant:

y(n τ) := (c b)/a (exp{a n/|a|} − 1) u⁰ (9.245)
:= (c b)/|a| (1 − exp{−n}) u⁰ . (9.246)

For

n := 1 :: (1 − exp{−1}) = 0.63 , (9.247)
n := 5 :: (1 − exp{−5}) = 0.99 . (9.248)
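These numbers are easily reproduced from Equation (9.236); the parameters a, b, c and the step height u⁰ below are illustrative:

```python
import numpy as np

# Step response of the scalar first-order SISO, Eq (9.236).
a, b, c, u0 = -0.5, 2.0, 1.0, 1.0   # illustrative parameters, a < 0 (stable)
tau = 1.0 / abs(a)                  # time constant, Eq (9.244)
k = c * b / abs(a)                  # steady-state gain, Eq (9.238)

def y(t):
    return (c * b / a) * (np.exp(a * t) - 1.0) * u0   # Eq (9.236)

frac_1tau = y(tau) / (k * u0)       # fraction of steady state after 1 tau
frac_5tau = y(5 * tau) / (k * u0)   # ... and after 5 tau
```

Independently of the particular parameter values, the response covers 63 % of the steady-state change after one time constant and 99 % after five.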

Figure 9.29: Step response for the first-order scalar SISO: the input step u(t) of height u⁰, the response y(t), the initial tangent w(t), the steady-state asymptote v(t) = c b u⁰/|a|, and the time constant τ := 1/|a|, at which the response has covered 63 % of the steady-state change