The present book follows the first-semester analytic curriculum of the course 'Mathematics for Engineers' at the Faculty of Engineering in Foreign Languages (FILS), University Politehnica of Bucharest; this is why the combination of its two principal parts, Systems of Differential Equations (SDEs) and Complex Analysis, may seem a little strange.
The book intends to bring together all the necessary elements for a pro-
found understanding of the selected mathematical notions, both theory and
applications.
In the first five chapters, the interested reader finds the definitions of
the notions, properties, theorems with complete proof, suggestive examples,
completely solved exercises and exercises with or without a hint (solving
these exercises could be a good method to verify the proper understanding
of the material). The last chapter is just an invitation for the interested
reader to use mathematical specialized software like MATLAB or MAPLE
for practical applications of the notions presented in the previous chapters.
The book aims to be a convenient source for studying linear homogeneous and non-homogeneous SDEs, complex differentiation and integration of functions of a complex variable, and residue theory with its applications, in order to eliminate searches on the Internet or numerous trips to the library.
Even if the present book is mainly addressed to students in the 1st or 2nd year of study in an engineering faculty, it may be a useful tool for students attending Master's studies or students who want to participate in a student math competition, such as 'Traian Lalescu'.
The authors would like to express warm thanks to the referees who read the material and contributed to its improvement, and to all the colleagues for their precious suggestions and valuable observations. A very
special thanks goes to Laurenţiu Toader who designed a significant part of
the figures.
Contents

Foreword

3 Complex Differentiation
3.1 Analytic Functions
3.2 Harmonic Conjugates
3.3 Analytic Continuation
3.4 Determination of Analytic Functions
3.5 Exercises

6 SOFTWARE
6.1 Stability with MATLAB
6.2 MATLAB and MAPLE Commands for Complex Numbers and for Functions of a Complex Variable
6.2.1 MATLAB
6.2.2 MAPLE

Bibliography
Index
Chapter 1
Systems of Differential
Equations (SDEs)
Józef Hoene-Wroński, Thomas Muir [22] and Giuseppe Peano [26] are just a few of the large number of mathematicians who developed this important branch of mathematics. About the life and the mathematical contributions of Wroński, one should consult [30] (translated from Polish).
The definition of a stable SDE is due to Henri Poincaré [28], [29] (in French) and Alexandr Lyapunov [17] (translated from Russian into French by Édouard Davaux and from French into English by A. T. Fuller). They observed that a system is stable only when the initial conditions lose their influence.
1.1 Linear and Homogeneous SDEs
1.1.1 First Order Linear SDEs
A linear system of differential equations (for short, an SDE) has the normal form
$$\begin{cases} y_1' = a_{11}(x)y_1 + a_{12}(x)y_2 + \cdots + a_{1n}(x)y_n + b_1(x)\\ y_2' = a_{21}(x)y_1 + a_{22}(x)y_2 + \cdots + a_{2n}(x)y_n + b_2(x)\\ \quad\cdots\\ y_n' = a_{n1}(x)y_1 + a_{n2}(x)y_2 + \cdots + a_{nn}(x)y_n + b_n(x) \end{cases} \tag{1.1}$$
Assume that $a_{ij}: I \to \mathbb{R}$ and $b_i: I \to \mathbb{R}$ are continuous functions on an interval $I \subset \mathbb{R}$, $\forall i, j = \overline{1,n}$.
The system is said to be homogeneous if $b_i \equiv 0$, $\forall i = \overline{1,n}$, and it will be denoted (1.1′).
Let us consider the matrix $A(x)$ and the vectors $Y(x)$ and $B(x)$,
$$A(x) = \begin{pmatrix} a_{11}(x) & a_{12}(x) & \cdots & a_{1n}(x)\\ a_{21}(x) & a_{22}(x) & \cdots & a_{2n}(x)\\ \cdots & \cdots & \cdots & \cdots\\ a_{n1}(x) & a_{n2}(x) & \cdots & a_{nn}(x) \end{pmatrix},\quad Y(x) = \begin{pmatrix} y_1(x)\\ y_2(x)\\ \cdots\\ y_n(x) \end{pmatrix},\quad B(x) = \begin{pmatrix} b_1(x)\\ b_2(x)\\ \cdots\\ b_n(x) \end{pmatrix}.$$
Then the system (1.1) can be written in the form
$$Y'(x) = A(x)Y(x) + B(x).$$
An initial condition is an equality of the form
$$Y(x_0) = Y_0. \tag{1.3}$$
ii) if $A$ and $B$ are continuous on $I$, then the initial value problems (1.1) and (1.1′) have unique solutions.
Proof. An idea for the proof in the case (1.1′) is to consider the sequence of vector functions $(Y_k)_{k \in \mathbb{N}}$, defined by
$$Y_0(x) = Y_0, \qquad Y_k(x) = Y_0 + \int_{x_0}^{x} A(t)\,Y_{k-1}(t)\,dt, \quad k \ge 1.$$
Now, if $Y_1$ and $Y_2$ are two solutions of (1.1′) with $Y_1(x_0) = Y_2(x_0)$, then $Y(x) = Y_1(x) - Y_2(x)$ verifies $Y(x_0) = 0$ and
$$\|Y(x)\| = \left\| \int_{x_0}^{x} A(t)\left[Y_1(t) - Y_2(t)\right] dt \right\|.$$
Using some elementary computations and estimates, one can prove that $\|Y(x)\|$ is 0, so the solution is unique.
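The Picard iterates above lend themselves to a quick numerical experiment. The following Python sketch is our own illustration (not from the book): it computes the iterates $Y_k$ on a grid with the trapezoid rule for a constant matrix $A$, and compares the limit with a solution known in closed form. The function name `picard` and all numerical choices are assumptions of this sketch.

```python
import numpy as np

def picard(A, Y0, x0, x1, iters=25, steps=401):
    """Approximate the solution of Y' = A Y, Y(x0) = Y0, by the Picard
    iterates Y_k(x) = Y0 + integral_{x0}^{x} A Y_{k-1}(t) dt."""
    t = np.linspace(x0, x1, steps)
    h = t[1] - t[0]
    Y = np.tile(Y0.astype(float), (steps, 1))          # Y_0(t) = Y0 for all t
    for _ in range(iters):
        f = Y @ A.T                                    # integrand A Y_{k-1}(t)
        cum = np.cumsum((f[1:] + f[:-1]) * h / 2, axis=0)  # trapezoid rule
        Y = Y0 + np.vstack([np.zeros_like(Y0), cum])   # next iterate Y_k
    return Y[-1]                                       # value at x1

# check case (not from the book): Y' = [[0,1],[-1,0]] Y gives y1 = cos x,
# y2 = -sin x for Y(0) = (1, 0)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
approx = picard(A, np.array([1.0, 0.0]), 0.0, 1.0)
print(np.allclose(approx, [np.cos(1.0), -np.sin(1.0)], atol=1e-4))
```

The iteration converges factorially fast; the residual error here comes from the quadrature grid, not from the number of iterations.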
Proof. Let $Y_i$, $i = \overline{1,k}$, be solutions of (1.1′), i.e. $Y_i' = AY_i$, and let $C_i$, $i = \overline{1,k}$, be real constants. Consider the linear combination $Y = \sum_{i=1}^{k} C_i Y_i$. Then
$$Y' = \sum_{i=1}^{k} C_i Y_i' = \sum_{i=1}^{k} C_i A Y_i = A\left(\sum_{i=1}^{k} C_i Y_i\right) = AY,$$
Conversely, if we have that $Y_1(x), Y_2(x), \ldots, Y_k(x)$ are linearly independent and $C_1 Y_1(x_0) + C_2 Y_2(x_0) + \cdots + C_k Y_k(x_0) = 0$, it follows by Proposition 1.1.2 that $Y = C_1 Y_1 + C_2 Y_2 + \cdots + C_k Y_k$ is a solution of (1.1′) which verifies $Y(x_0) = 0$. But the null function $Z(x) \equiv 0$ obviously verifies (1.1′) and $Z(x_0) = 0$. By the Existence and Uniqueness Theorem (see Theorem 1.1.1) we have that $Y(x) \equiv Z(x) \equiv 0$, i.e. $C_1 Y_1(x) + C_2 Y_2(x) + \cdots + C_k Y_k(x) = 0$, $\forall x \in I$, hence $C_1 = C_2 = \cdots = C_k = 0$. It follows that $Y_1(x_0), Y_2(x_0), \ldots, Y_k(x_0)$ are linearly independent.
has only the trivial solution C1 = C2 = · · · = Cn = 0.
This is equivalent to the fact that, for any $x \in I$, the determinant of the system (1.5) is different from 0, i.e. $W(x) \neq 0$, $\forall x \in I$.
In this case, the general solution of the SDEs can be obtained by using
the eigenvalues, the eigenvectors and the generalized eigenvectors of A.
Let us recall the necessary notions from Linear Algebra.
The characteristic polynomial of the matrix A is det(sIn − A). The roots
λ ∈ C of the characteristic polynomial are called the eigenvalues of A.
The set of the eigenvalues of A is called the spectrum of A and it is
denoted by σ(A).
A vector $v \in \mathbb{C}^n$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$ if
$$Av = \lambda v \quad\text{and}\quad v \neq 0. \tag{1.9}$$
For an eigenvalue λ, the set formed by the null-vector and by all corre-
sponding eigenvectors is called the eigenspace of λ and it is denoted Sλ (A).
The geometric multiplicity of λ, denoted by mg (λ), is the dimension of Sλ (A)
(hence mg (λ) is the maximum number of linearly independent eigenvectors
which correspond to λ). The algebraic multiplicity of λ, ma (λ), is the multi-
plicity of λ as a root of det(sIn − A). Obviously, ma (λ) ≥ mg (λ), for every
eigenvalue λ.
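Both multiplicities can be checked numerically. The sketch below is our own illustration (the helper `multiplicities` and its tolerances are assumptions, not a library API): it estimates $m_a(\lambda)$ as the size of an eigenvalue cluster and $m_g(\lambda)$ as $n - \operatorname{rank}(A - \lambda I)$.

```python
import numpy as np

def multiplicities(A, tol=1e-4):
    """Group the numerically computed eigenvalues of A and report, for each
    group, (lam, ma, mg): ma = cluster size (algebraic multiplicity),
    mg = n - rank(A - lam I) (geometric multiplicity). The clustering
    tolerance is a purely numerical device, not part of the theory."""
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    out = []
    for lam in eigvals:
        if any(abs(lam - mu) < tol for mu, _, _ in out):
            continue                                  # eigenvalue already recorded
        ma = int(np.sum(np.abs(eigvals - lam) < tol))
        mg = n - int(np.linalg.matrix_rank(A - lam * np.eye(n), tol=1e-8))
        out.append((lam, ma, mg))
    return out

# a 3x3 Jordan cell for lam = 2: ma(2) = 3 while mg(2) = 1
J = np.array([[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 2.0]])
print(multiplicities(J))
```

A generous clustering tolerance is needed because the eigenvalues of a defective matrix are computed with an error of roughly machine-epsilon to the power $1/m_a$.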
If $m_a(\lambda) = m_g(\lambda)$, let us denote this number by $m$; hence $\lambda$ occupies $m$ places in $\sigma(A)$ and there are $m$ eigenvectors $v_1, v_2, \ldots, v_m$ that verify
If, for some $\lambda \in \sigma(A)$, $m_g(\lambda) < m_a(\lambda)$, in order to construct a new basis $B$, the $m_g(\lambda)$ linearly independent eigenvectors are completed to a number of $m_a(\lambda)$ vectors by chains of generalized eigenvectors. Such a chain $\{v_1, v_2, \ldots, v_k\}$ starts with an eigenvector, say $v_1$, and verifies the relations
$$Av_1 = \lambda v_1,\quad Av_2 = \lambda v_2 + v_1,\ \ldots,\ Av_i = \lambda v_i + v_{i-1},\ \ldots,\ Av_k = \lambda v_k + v_{k-1}. \tag{1.11}$$
The vectors $v_2, \ldots, v_k$ are also called principal vectors. Then, by the above construction, due to (1.10), this chain will provide, on the block diagonal of the matrix $\widetilde{A}$, a Jordan cell of dimension $k$:
$$J = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 & 0\\ 0 & \lambda & 1 & \cdots & 0 & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ 0 & 0 & 0 & \cdots & \lambda & 1\\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}.$$
Now, let us determine the general solution of the linear homogeneous SDE
given by (1.8).
We distinguish two cases, according to whether the matrix $A$ is diagonalizable or not.
Case I : The matrix A is diagonalizable.
By the previous discussion, this happens if and only if $m_g(\lambda) = m_a(\lambda)$ for every $\lambda \in \sigma(A)$; hence the matrix $A$ has $n$ linearly independent eigenvectors $v_1, v_2, \ldots, v_i, \ldots, v_n$, which correspond to the eigenvalues (counted with multiplicities) $\lambda_1, \lambda_2, \ldots, \lambda_i, \ldots, \lambda_n$, i.e., by (1.9),
$$Av_i = \lambda_i v_i,\quad v_i \neq 0,\quad \forall i = \overline{1,n}. \tag{1.12}$$
Let us introduce the vector function $Y_i(x) = v_i e^{\lambda_i x}$ in both members of (1.8). We first have that
$$Y_i'(x) = \left(v_i e^{\lambda_i x}\right)' = v_i \lambda_i e^{\lambda_i x},$$
and, using (1.12), we obtain that
$$AY_i(x) = A\left(v_i e^{\lambda_i x}\right) = (Av_i)\, e^{\lambda_i x} = \lambda_i v_i e^{\lambda_i x}.$$
Therefore $Y_i'(x) = AY_i(x)$, hence $Y_i(x) = v_i e^{\lambda_i x}$ is a solution of the system (1.8), $\forall i = \overline{1,n}$.
The Wronskian of these solutions is
$$W(x) = \det\left[Y_1(x), Y_2(x), \ldots, Y_n(x)\right] = \det\left[v_1 e^{\lambda_1 x}, v_2 e^{\lambda_2 x}, \ldots, v_n e^{\lambda_n x}\right]$$
Using (1.12), (1.13) and (1.7) we can summarize the method of solving a
linear homogeneous SDE.
Algorithm 1.1.7 (The diagonalizable case).
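The steps of the diagonalizable case (eigenvalues, eigenvectors, fundamental system, general solution) can be sketched numerically. The Python fragment below is our own minimal illustration (the function name `fundamental_system` is ours, and the book's own software examples use MATLAB/MAPLE instead); it uses matrix a) of Exercise E 1 below.

```python
import numpy as np

def fundamental_system(A):
    """For a diagonalizable A, return x -> matrix whose i-th column is
    v_i e^{lam_i x}, a fundamental system of solutions of Y' = A Y."""
    lam, V = np.linalg.eig(A)             # eigenpairs A v_i = lam_i v_i
    return lambda x: V * np.exp(lam * x)  # column i is v_i e^{lam_i x}

# matrix a) of Exercise E 1 below; its eigenvalues turn out to be -4, -3, 0
A = np.array([[-3.0, 1, 1], [1, -3, 1], [1, 1, -1]])
Phi = fundamental_system(A)

# numerical check that every column solves Y' = A Y (central difference)
x, h = 0.7, 1e-6
deriv = (Phi(x + h) - Phi(x - h)) / (2 * h)
ok = np.allclose(deriv, A @ Phi(x), atol=1e-4)
print(ok)
```

The general solution is then any linear combination of the columns with constant coefficients.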
Remark 1.1.8. Since $m_g(\lambda) = m_a(\lambda)$, $\forall \lambda \in \sigma(A)$, to any $\lambda$ there corresponds a number of linearly independent eigenvectors equal to the multiplicity of $\lambda$ in the spectrum of $A$; hence the total number of linearly independent eigenvectors is $n$ (the dimension of the matrix $A$).
Remark 1.1.9. If one considers the basis matrix $T = [v_1, v_2, \ldots, v_n]$, where $v_1, v_2, \ldots, v_n$ are linearly independent eigenvectors, the matrix $\widetilde{A} = T^{-1}AT$ is a diagonal matrix with the eigenvalues $\lambda_i$ on the main diagonal,
$$\widetilde{A} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \cdots & \cdots & \cdots & \cdots\\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$
$$\begin{aligned} Av_1 &= \lambda v_1,\quad v_1 \neq 0,\\ Av_2 &= \lambda v_2 + v_1,\\ &\cdots\\ Av_k &= \lambda v_k + v_{k-1},\\ &\cdots\\ Av_m &= \lambda v_m + v_{m-1}. \end{aligned} \tag{1.15}$$
We have the following result.
Proposition 1.1.10. The following vector functions corresponding to the chain (1.15),
$$\begin{aligned} Y_1(x) &= v_1 e^{\lambda x},\\ Y_2(x) &= \left(\frac{x}{1!}\, v_1 + v_2\right) e^{\lambda x},\\ Y_3(x) &= \left(\frac{x^2}{2!}\, v_1 + \frac{x}{1!}\, v_2 + v_3\right) e^{\lambda x},\\ &\cdots\\ Y_k(x) &= \left(\frac{x^{k-1}}{(k-1)!}\, v_1 + \frac{x^{k-2}}{(k-2)!}\, v_2 + \cdots + \frac{x}{1!}\, v_{k-1} + v_k\right) e^{\lambda x},\quad k \le m, \end{aligned} \tag{1.16}$$
are linearly independent solutions of (1.8).
$$Y_k'(x) = \left(\frac{x^{k-2}}{(k-2)!}\, v_1 + \frac{x^{k-3}}{(k-3)!}\, v_2 + \cdots + \frac{1}{1!}\, v_{k-1}\right) e^{\lambda x} + \left(\frac{x^{k-1}}{(k-1)!}\, v_1 + \frac{x^{k-2}}{(k-2)!}\, v_2 + \cdots + \frac{x}{1!}\, v_{k-1} + v_k\right) \lambda e^{\lambda x}.$$
$$AY_k(x) = \left(\frac{x^{k-1}}{(k-1)!}\, Av_1 + \cdots + \frac{x}{1!}\, Av_{k-1} + Av_k\right) e^{\lambda x} = \left(\frac{x^{k-1}}{(k-1)!}\, \lambda v_1 + \cdots + \frac{x}{1!}\,(\lambda v_{k-1} + v_{k-2}) + (\lambda v_k + v_{k-1})\right) e^{\lambda x},$$
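The chain solutions (1.16) can also be verified numerically. The sketch below is our own illustration (the helper `chain_solution` is an assumption of this sketch, not the book's notation): it builds $Y_3$ from the chain of a $3 \times 3$ Jordan cell and checks $Y_3' = AY_3$ with a central difference.

```python
import numpy as np
from math import factorial

def chain_solution(lam, chain, k):
    """Y_k(x) = (x^{k-1}/(k-1)! v_1 + ... + x/1! v_{k-1} + v_k) e^{lam x},
    the solution (1.16) built from a Jordan chain v_1, ..., v_k."""
    def Y(x):
        s = sum(x ** (k - 1 - j) / factorial(k - 1 - j) * chain[j]
                for j in range(k))
        return s * np.exp(lam * x)
    return Y

# a 3x3 Jordan cell: e_1, e_2, e_3 form a chain for lam = 2, since
# A e_1 = 2 e_1, A e_2 = 2 e_2 + e_1, A e_3 = 2 e_3 + e_2, as in (1.15)
lam = 2.0
A = lam * np.eye(3) + np.diag([1.0, 1.0], k=1)
chain = [np.eye(3)[:, j] for j in range(3)]

Y3 = chain_solution(lam, chain, 3)
x, h = 0.3, 1e-6
deriv = (Y3(x + h) - Y3(x - h)) / (2 * h)     # numerical Y_3'(x)
ok = np.allclose(deriv, A @ Y3(x), atol=1e-4)
print(ok)
```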
The general solution of the SDE (1.8) can be obtained in this case by the following algorithm, in which an eigenvector without generalized eigenvectors will be considered as a chain of length 1.

Algorithm 1.1.11 (The general case).
Theorem 1.1.12. Let $\lambda$ be an eigenvalue of the matrix $A$.
ii) If $m_a(\lambda) > m_g(\lambda)$, then there exist some solutions $Y(x)$ whose components contain functions of the form $x^j e^{\lambda x}$, where $j \ge 1$.
Theorem 1.2.2. The fundamental matrix $\Phi(x, x_0)$ has the following properties:
i) $\dfrac{d}{dx}\Phi(x, x_0) = A(x)\Phi(x, x_0)$, $\forall x, x_0 \in I$;
ii) $\Phi(x_0, x_0) = I_n$, $\forall x_0 \in I$;
$$e^{A(x - x_0)} = I_n + \frac{A(x - x_0)}{1!} + \frac{A^2 (x - x_0)^2}{2!} + \cdots + \frac{A^k (x - x_0)^k}{k!} + \cdots.$$
Proof. i) The derivative of a matrix is the matrix of the derivatives of all its entries.¹ Therefore, the derivative can be calculated, for instance, by taking the derivatives of all its columns $Y_i(x, x_0)$ and by using (1.1′):
$$\begin{aligned} \frac{d}{dx}\left(\Phi(x, x_0)\right) &= [Y_1'(x, x_0), Y_2'(x, x_0), \ldots, Y_n'(x, x_0)]\\ &= [A(x)Y_1(x, x_0), A(x)Y_2(x, x_0), \ldots, A(x)Y_n(x, x_0)]\\ &= A(x)[Y_1(x, x_0), Y_2(x, x_0), \ldots, Y_n(x, x_0)]\\ &= A(x)\Phi(x, x_0). \end{aligned}$$
¹The integral of a matrix $A$ is the matrix whose entries are the integrals of the entries of $A$.
ii) By the definition of $\Phi(x, x_0)$ (see Definition 1.2.1), one obtains that $\Phi(x_0, x_0) = I_n$.
iii) Introduce $\widetilde{Y}(x) = \Phi(x, x_0)Y_0$ in (1.1′) and in (1.3) and use i) and ii). We will get that
$$\widetilde{Y}'(x) = \frac{d}{dx}\left(\Phi(x, x_0)\right) Y_0 = \left(A(x)\Phi(x, x_0)\right) Y_0 = A(x)\left(\Phi(x, x_0)Y_0\right) = A(x)\widetilde{Y}(x),$$
Remark 1.2.3. If $C = [C_1, C_2, \ldots, C_n]^T$ is an arbitrary constant vector, then $Y(x) = \Phi(x, x_0)C$ is the general solution of SDE (1.1′).
Remark 1.2.4. If a matrix $M(x, x_0) \in \mathbb{R}^{n \times n}$ has the properties i) and ii), then $\Phi(x, x_0) = M(x, x_0)$.
Proof. Indeed, if $\dfrac{d}{dx} M(x, x_0) = A(x)M(x, x_0)$ and $M(x_0, x_0) = I_n$, then, by using the proof of iii), the vector function $\widehat{Y}(x) = M(x, x_0)Y_0$ is a solution of the initial value problem (1.1′), (1.3). But, by the Existence and Uniqueness Theorem (see Theorem 1.1.1), the solution of the initial value problem is unique, hence $\Phi(x, x_0)Y_0 = M(x, x_0)Y_0$, $\forall Y_0 \in \mathbb{R}^n$, i.e. $\Phi(x, x_0) = M(x, x_0)$.
Now we go back to the proof of Theorem 1.2.2 and we continue with the
properties iv) to vii).
iv) Consider the moments $x_0$, $x_1$ and $x$ and the corresponding values of the solution of the initial value problem (1.1′), (1.3), respectively $Y(x_0) = Y_0$, $Y(x_1)$ and $Y(x)$ (Figure 1.1).
Figure 1.1: The Values of the Solution of the Initial Value Problem
It follows that $Y(x) = \Phi(x, x_1)\Phi(x_1, x_0)Y_0$, but again the solution is unique, hence
$$\Phi(x, x_0)Y_0 = \Phi(x, x_1)\Phi(x_1, x_0)Y_0, \quad \forall Y_0 \in \mathbb{R}^n.$$
Therefore $\Phi(x, x_0) = \Phi(x, x_1)\Phi(x_1, x_0)$.
v) A matrix $D$ is the inverse of $C \in \mathbb{R}^{n \times n}$ if and only if $DC = I_n$. Let us consider $C = \Phi(x, x_0)$ and $D = \Phi(x_0, x)$. By iv) and ii), one obtains that
$$\begin{aligned} \frac{d}{dx}\left(M(x, x_0)\right) &= O_n + A(x) + A(x)\int_{x_0}^{x} A(t_2)\,dt_2 + \cdots + A(x)\int_{x_0}^{x} A(t_2) \int_{x_0}^{t_2} A(t_3) \cdots \int_{x_0}^{t_{k-1}} A(t_k)\,dt_k \cdots dt_2 + \cdots\\ &= A(x)\left[I_n + \int_{x_0}^{x} A(t_2)\,dt_2 + \cdots + \int_{x_0}^{x} A(t_2) \int_{x_0}^{t_2} A(t_3) \cdots \int_{x_0}^{t_{k-1}} A(t_k)\,dt_k \cdots dt_2 + \cdots\right]\\ &= A(x)M(x, x_0) \end{aligned}$$
and
$$M(x_0, x_0) = I_n + \int_{x_0}^{x_0} A(t_1)\,dt_1 + \int_{x_0}^{x_0} A(t_1)\int_{x_0}^{t_1} A(t_2)\,dt_2\,dt_1 + \cdots = I_n + O_n + O_n + \cdots = I_n.$$
Hence $M(x, x_0)$ has the properties i) and ii) and, by Remark 1.2.4, we have that
$$\Phi(x, x_0) = M(x, x_0),$$
which is equivalent to the fact that $\Phi(x, x_0)$ is the sum of the series (1.18). So the Peano-Baker formula is proved.
vii) If $A$ is a constant matrix, the Peano-Baker formula (1.18) becomes
$$\Phi(x, x_0) = I_n + A\int_{x_0}^{x} dt_1 + A^2 \int_{x_0}^{x}\int_{x_0}^{t_1} dt_2\,dt_1 + \cdots + A^k \underbrace{\int_{x_0}^{x}\int_{x_0}^{t_1} \cdots \int_{x_0}^{t_{k-1}} dt_k \cdots dt_2\,dt_1}_{k\ \text{integrals}} + \cdots. \tag{1.19}$$
In the following we evaluate the above integrals:
$$\int_{x_0}^{x} dt_1 = t_1 \Big|_{x_0}^{x} = \frac{x - x_0}{1!},$$
$$\int_{x_0}^{x}\int_{x_0}^{t_1} dt_2\,dt_1 = \int_{x_0}^{x} \frac{t_1 - x_0}{1!}\,dt_1 = \frac{(t_1 - x_0)^2}{2 \cdot 1!}\Big|_{x_0}^{x} = \frac{(x - x_0)^2}{2 \cdot 1!} - \frac{(x_0 - x_0)^2}{2 \cdot 1!} = \frac{(x - x_0)^2}{2!}.$$
We finish the proof of vii) by induction. Let us assume that such a multiple integral with $k - 1$ integrals has the value $\dfrac{(x - x_0)^{k-1}}{(k-1)!}$. Then we get that
$$\int_{x_0}^{x}\int_{x_0}^{t_1} \cdots \int_{x_0}^{t_{k-1}} dt_k \cdots dt_2\,dt_1 = \int_{x_0}^{x} \frac{(t_1 - x_0)^{k-1}}{(k-1)!}\,dt_1 = \frac{(t_1 - x_0)^k}{k(k-1)!}\Big|_{x_0}^{x} = \frac{(x - x_0)^k}{k!},$$
hence the evaluation is true for $k \ge 1$. By (1.19), we obtain that the fundamental matrix is
$$\Phi(x, x_0) = I_n + \frac{A(x - x_0)}{1!} + \frac{A^2(x - x_0)^2}{2!} + \cdots + \frac{A^k(x - x_0)^k}{k!} + \cdots = e^{A(x - x_0)}.$$
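The series above converges very quickly for moderate matrices, which makes it easy to check numerically. The sketch below is our own illustration (it assumes `scipy` is available; `exp_series` is our name): a truncated partial sum is compared against `scipy.linalg.expm`.

```python
import numpy as np
from scipy.linalg import expm

def exp_series(A, x, x0, terms=30):
    """Partial sum I + A(x-x0)/1! + A^2(x-x0)^2/2! + ... of the exponential
    series obtained from the Peano-Baker formula for a constant matrix A."""
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A * ((x - x0) / k)   # A^k (x-x0)^k / k!
        S = S + term
    return S

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x, x0 = 1.5, 0.5
ok = np.allclose(exp_series(A, x, x0), expm(A * (x - x0)), atol=1e-10)
print(ok)
```

Production code should prefer `expm` (a scaling-and-squaring Padé method) over the raw series, which can suffer from cancellation for large $\|A\|(x - x_0)$.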
hence Y (x) verifies the SDE (1.1).
Now, let Y be an arbitrary solution of (1.1) and let us consider the vector
function Y − Yp . Then
$$Y_p = C_1(x) Y_1 + C_2(x) Y_2 + \cdots + C_n(x) Y_n, \tag{1.21}$$
where $C_1(x), C_2(x), \ldots, C_n(x)$ are functions of class $C^1(I)$: the constants in the general solution of the homogeneous system are replaced by functions. This represents the so-called variation of parameters.
Let us introduce the vector function (1.21) in the system (1.1). We will use the fact that $Y_i'(x) = A(x)Y_i(x)$, $\forall i = \overline{1,n}$. We have that
On the other hand,
and write the general solution of SDE (1.1′),
$$Y_h(x) = \Phi(x, x_0)C,$$
where $C = \begin{pmatrix} C_1\\ C_2\\ \cdots\\ C_n \end{pmatrix}$ is an arbitrary constant $n$-vector (see Remark 1.2.3).
We apply the variation of parameters method and look for a solution of (1.1) of the form
$$Y_p(x) = \Phi(x, x_0)C(x), \tag{1.24}$$
where $C(x) = \begin{pmatrix} C_1(x) & C_2(x) & \cdots & C_n(x) \end{pmatrix}^T$ is a $C^1(I)$ vector function.
We introduce $Y_p(x)$ from (1.24) in the system (1.1) and use property i) from Theorem 1.2.2. Then
$$Y_p'(x) = \frac{d}{dx}\left(\Phi(x, x_0)\right) C(x) + \Phi(x, x_0)C'(x) = A(x)\Phi(x, x_0)C(x) + \Phi(x, x_0)C'(x),$$
and
$$A(x)Y_p(x) + B(x) = A(x)\Phi(x, x_0)C(x) + B(x).$$
From $Y_p'(x) = A(x)Y_p(x) + B(x)$, one obtains the algebraic system
$$\Phi(x, x_0)C'(x) = B(x), \tag{1.25}$$
where, by property v) of Theorem 1.2.2, the fundamental matrix $\Phi(x, x_0)$ is nonsingular and $\Phi(x, x_0)^{-1} = \Phi(x_0, x)$;
[Stage 2]: Solve the system (1.25) by multiplying it with $\Phi(x, x_0)^{-1} = \Phi(x_0, x)$. Therefore
$$C'(x) = \Phi(x_0, x)B(x); \tag{1.26}$$
[Stage 3]: Integrate (1.26) on an interval $[x_0, x] \subset I$. So
$$C(x) = \int_{x_0}^{x} \Phi(x_0, t)B(t)\,dt;$$
[Stage 4]: Replace $C(x)$ in (1.24) and use property iv) in Theorem 1.2.2 (the semigroup property). One obtains the particular solution of system (1.1) as
$$Y_p(x) = \Phi(x, x_0)\int_{x_0}^{x} \Phi(x_0, t)B(t)\,dt = \int_{x_0}^{x} \Phi(x, t)B(t)\,dt;$$
[Stage 5]: Write the general solution of (1.1), $Y = Y_h + Y_p$, hence
$$Y(x) = \Phi(x, x_0)C + \int_{x_0}^{x} \Phi(x, t)B(t)\,dt. \tag{1.27}$$
Now, let us consider the initial value problem (1.1) with the initial condition (1.3). By replacing $x = x_0$ in (1.27), one gets
$$Y_0 = Y(x_0) = \Phi(x_0, x_0)C + \int_{x_0}^{x_0} \Phi(x_0, t)B(t)\,dt = I_n C = C,$$
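The variation of parameters formula (1.27) is easy to evaluate numerically when $A$ is constant, since then $\Phi(x, t) = e^{A(x-t)}$. The sketch below is our own illustration (it assumes `scipy`; `particular_solution` is our name): the integral term is computed with the trapezoid rule and compared against a closed form valid for constant $B$.

```python
import numpy as np
from scipy.linalg import expm

def particular_solution(A, B, x0, x1, steps=2001):
    """Evaluate Y_p(x1) = integral_{x0}^{x1} e^{A(x1-t)} B(t) dt with the
    trapezoid rule -- the particular solution from Stage 4, constant A."""
    t = np.linspace(x0, x1, steps)
    h = t[1] - t[0]
    vals = np.array([expm(A * (x1 - ti)) @ B(ti) for ti in t])
    return ((vals[0] + vals[-1]) / 2 + vals[1:-1].sum(axis=0)) * h

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
b = np.array([1.0, 1.0])                     # constant right-hand side B(x) = b
Y0 = np.array([0.0, 1.0])
Y1 = expm(A) @ Y0 + particular_solution(A, lambda t: b, 0.0, 1.0)  # Y(1) by (1.27)

# for constant b and invertible A: Y(x) = e^{Ax}Y0 + A^{-1}(e^{Ax} - I)b
exact = expm(A) @ Y0 + np.linalg.solve(A, (expm(A) - np.eye(2)) @ b)
print(np.allclose(Y1, exact, atol=1e-6))
```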
1.2.3 Linear Control Systems
A linear continuous-time control system $\Sigma$ has the following state-space representation:
$$\Sigma: \begin{cases} \dot{x}(t) = A(t)x(t) + B(t)u(t) & \text{the state equation} \qquad (1.30)\\ y(t) = C(t)x(t) + D(t)u(t) & \text{the output equation} \qquad (1.31) \end{cases}$$
Here $A(t), B(t), C(t), D(t)$ are respectively $n \times n$, $n \times m$, $p \times n$, $p \times m$ continuous real matrices, $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$ and $y(t) \in \mathbb{R}^p$ are respectively the state, input (or control) and output of the system $\Sigma$, and $t \in \mathbb{R}$ is the time.
Consider the initial condition $x(t_0) = x_0$, for an initial moment $t_0$ and an initial state $x_0$. Let $\Phi(t, t_0)$ be the fundamental matrix of the matrix $A(t)$. The variation of parameters formula (1.28) gives (with $Y(x)$, $Y_0$, $B(x)$ replaced respectively by $x(t)$, $x_0$, $B(t)u(t)$) the formula of the state of the system $\Sigma$ at the moment $t$:
$$x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, s)B(s)u(s)\,ds. \tag{1.32}$$
By replacing the state $x(t)$ given by (1.32) in the output equation (1.31), one obtains the input-output map (or the general response) of the system $\Sigma$:
$$y(t) = C(t)\Phi(t, t_0)x_0 + \int_{t_0}^{t} C(t)\Phi(t, s)B(s)u(s)\,ds + D(t)u(t). \tag{1.33}$$
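The input-output map (1.33) can be evaluated directly for constant-coefficient systems. The sketch below is our own illustration (it assumes `scipy`; the helper name `response` and the scalar test system are ours): the convolution integral is approximated with the trapezoid rule for a unit-step input.

```python
import numpy as np
from scipy.linalg import expm

def response(A, B, C, D, x0, u, t, steps=2001):
    """General response (1.33) for constant matrices, with t0 = 0:
    y(t) = C e^{At} x0 + integral_0^t C e^{A(t-s)} B u(s) ds + D u(t)."""
    s = np.linspace(0.0, t, steps)
    h = s[1] - s[0]
    vals = np.array([C @ expm(A * (t - si)) @ B @ u(si) for si in s])
    integral = ((vals[0] + vals[-1]) / 2 + vals[1:-1].sum(axis=0)) * h
    return C @ expm(A * t) @ x0 + integral + D @ u(t)

# scalar example: xdot = -x + u, y = x, zero initial state, unit step input;
# the exact response is y(t) = 1 - e^{-t}
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
y = response(A, B, C, D, np.array([0.0]), lambda s: np.array([1.0]), 2.0)
print(np.allclose(y, 1.0 - np.exp(-2.0), atol=1e-6))
```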
1.3 Stability
1.3.1 Asymptotic stability
Consider the initial value problem
$$\Sigma: \quad Y'(x) = AY(x), \quad Y(0) = Y_0, \tag{1.36}$$
where $A$ is a constant $n \times n$ matrix.
By Theorem 1.2.2 iii) and vii), the solution of the initial value problem (1.36) is
$$Y(x) = e^{Ax}Y_0. \tag{1.37}$$
Definition 1.3.1. The system is said to be stable (or the zero solution is stable) if for any $\varepsilon > 0$ there exists $\delta > 0$ such that, for any $Y_0 \in \mathbb{R}^n$ with $\|Y_0\| < \delta$, the corresponding solution verifies $\|Y(x)\| < \varepsilon$, for any $x \ge 0$.
If the system is not stable, it is called unstable.
The system is said to be asymptotically stable if it is stable and, for any $Y_0 \in \mathbb{R}^n$, $\lim_{x \to \infty} Y(x) = 0$.
The next theorem establishes the conditions for the possible situations concerning the stability of the linear homogeneous SDEs with constant coefficients (the LTI systems).
Proof. Let
$$\widehat{A} = \begin{pmatrix} J_1 & & & & \\ & \ddots & & & \\ & & J_i & & \\ & & & \ddots & \\ & & & & J_s \end{pmatrix}$$
be the Jordan canonical form of the matrix $A$. This means that there exists a nonsingular (transition) matrix $T$ such that $A = T\widehat{A}T^{-1}$, $\widehat{A}$ has the structure above and the $J_i$ are Jordan cells,
$$J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1\\ & & & \lambda_i \end{pmatrix},$$
where $J_i \in \mathcal{M}_{m_i}$, with $m_i > 1$ if $m_a(\lambda_i) > m_g(\lambda_i)$, and $J_i = [\lambda_i]$ if $m_a(\lambda_i) = m_g(\lambda_i)$.
In this case, the solution is $Y(x) = e^{Ax}Y_0 = Te^{\widehat{A}x}T^{-1}Y_0$ and
$$e^{\widehat{A}x} = \begin{pmatrix} e^{J_1 x} & & & & \\ & \ddots & & & \\ & & e^{J_i x} & & \\ & & & \ddots & \\ & & & & e^{J_s x} \end{pmatrix},$$
with
$$e^{J_i x} = e^{\lambda_i x} \begin{pmatrix} 1 & \frac{x}{1!} & \frac{x^2}{2!} & \cdots & \frac{x^{m_i - 1}}{(m_i - 1)!}\\ & 1 & \frac{x}{1!} & \cdots & \frac{x^{m_i - 2}}{(m_i - 2)!}\\ & & 1 & \ddots & \vdots\\ & & & \ddots & \frac{x}{1!}\\ & & & & 1 \end{pmatrix}$$
if $m_a(\lambda_i) > m_g(\lambda_i)$, and $e^{J_i x} = e^{\lambda_i x}$ if $m_a(\lambda_i) = m_g(\lambda_i)$.
Then, using the equality $|e^z| = e^{\operatorname{Re}(z)}$, one obtains:
a) The system $\Sigma$ is asymptotically stable if and only if
$$\lim_{x \to \infty} Y(x) = 0 \in \mathbb{R}^n,$$
which is equivalent to
$$\lim_{x \to \infty} x^k e^{\lambda_i x} = 0, \quad \forall i = \overline{1,s} \text{ and } \forall k = \overline{0, m_i - 1};$$
hence there exist solutions $Y(x)$ with components which verify $\lim_{x \to \infty} y_i(x) = \infty$; therefore the system is unstable.
c) For the eigenvalues $\lambda$ of the matrix $A$ with negative real parts, we get that
$$\lim_{x \to \infty} |x^k e^{\lambda x}| = \lim_{x \to \infty} x^k e^{\operatorname{Re}(\lambda)x} = 0,$$
hence the functions $x^k e^{\lambda x}$ are bounded; and for the eigenvalues which have null real part, $\lambda = i\beta$, with $m_a(\lambda) = m_g(\lambda)$, the corresponding terms in the solutions $Y(x)$ are of the form $e^{i\beta x}$, with $|e^{i\beta x}| = 1$. Therefore, the entries of the exponential matrix $e^{Ax}$ remain bounded, so $\|e^{Ax}\| \le M$ for some $M > 0$, and the solution verifies $\|Y(x)\| \le M\|Y_0\|$. For an arbitrarily chosen $\varepsilon > 0$, take $\delta = \dfrac{\varepsilon}{M}$. Then, for any $Y_0$ with $\|Y_0\| < \delta$, we have that
$$\|Y(x)\| \le M\|Y_0\| < M \cdot \frac{\varepsilon}{M} = \varepsilon,$$
hence $\|Y(x)\| < \varepsilon$, i.e. the system $\Sigma$ is stable. But the system $\Sigma$ is not asymptotically stable since
$$\lim_{x \to \infty} |e^{i\beta x}| = 1 \neq 0.$$
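The eigenvalue criterion from this proof is straightforward to apply in code. The sketch below is our own illustration (the helper `classify` and its tolerance are ours); note that for eigenvalues on the imaginary axis the multiplicity test $m_a = m_g$ is still needed, which this minimal sketch does not perform.

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify the LTI system Y' = AY from sigma(A):
    all Re(lam) < 0  -> asymptotically stable;
    some Re(lam) > 0 -> unstable;
    otherwise the imaginary-axis eigenvalues need the ma == mg check."""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    return "marginal (imaginary-axis eigenvalues: check ma == mg)"

print(classify(np.array([[0.0, 1.0], [-2.0, -3.0]])))  # eigenvalues -1, -2
print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))    # eigenvalues +1, -1
```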
Let us denote by $N^-$, $N^+$, $N^0$ the number of roots $\lambda$ of $p(z)$ with $\operatorname{Re}\lambda < 0$, $\operatorname{Re}\lambda > 0$ and $\operatorname{Re}\lambda = 0$, respectively.
Hurwitz proved that if $\Delta_i \neq 0$, $\forall i = \overline{1,n}$, then $N^0 = 0$ and $N^+$ is equal to the number of sign changes (denoted by $N_{sc}$) in the following finite sequence:
$$1,\ \Delta_1,\ \frac{\Delta_2}{\Delta_1},\ \ldots,\ \frac{\Delta_n}{\Delta_{n-1}}.$$
Using this, one obtains the following result:
Theorem 1.3.4 (Routh-Hurwitz Criterion). The system Σ is asymptotically
stable if and only if the characteristic polynomial of the matrix A is a Hurwitz
polynomial.
Proof. Let $p(z) = a_n z^n + a_{n-1} z^{n-1} + \cdots + a_0$ be the characteristic polynomial of the matrix $A$, where $a_n > 0$. If $\Delta_1 > 0$, $\Delta_2 > 0$, ..., $\Delta_n > 0$, then $N^0 = 0$ and, since $1 > 0$, we have that $N_{sc} = 0$. Therefore $N^+ = 0$ and all the eigenvalues of the matrix $A$ have negative real parts. According to Theorem 1.3.2, the system $\Sigma$ is asymptotically stable.
Conversely, if the system $\Sigma$ is asymptotically stable, then $N^+ = 0$, hence $1 > 0$, $\Delta_1 > 0$, $\dfrac{\Delta_2}{\Delta_1} > 0$, $\dfrac{\Delta_3}{\Delta_2} > 0$, ..., $\dfrac{\Delta_n}{\Delta_{n-1}} > 0$, thus $\Delta_j > 0$ for every $j \in \{1, 2, \ldots, n\}$.
Remark 1.3.5. If there exists a sign change in the sequence of the coefficients of $p(z)$: $a_n, a_{n-1}, \ldots, a_1, a_0$ (i.e. the entries on the main diagonal of the matrix $H_n$), then $p(z)$ has a root with nonnegative real part, hence it is not a Hurwitz polynomial (a Hurwitz polynomial with $a_n > 0$ has all coefficients positive).
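The Hurwitz determinants $\Delta_i$ can be computed directly from the coefficient pattern of the Hurwitz matrix. The sketch below is our own illustration (the helpers `hurwitz_minors` and `is_hurwitz` are our names; the row pattern $H_{ij} = a_{n-1-2j+i}$, with $a_k = 0$ outside $0 \le k \le n$, is the standard Gantmacher convention):

```python
import numpy as np

def hurwitz_minors(coeffs):
    """Leading principal minors Delta_1, ..., Delta_n of the Hurwitz matrix of
    p(z) = a_n z^n + ... + a_0, with coeffs = [a_n, a_{n-1}, ..., a_0]."""
    a = list(coeffs)
    n = len(a) - 1
    def c(k):                      # coefficient a_k, or 0 outside 0..n
        return a[n - k] if 0 <= k <= n else 0.0
    H = np.array([[c(n - 1 - 2 * j + i) for j in range(n)] for i in range(n)],
                 dtype=float)
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

def is_hurwitz(coeffs):
    """Routh-Hurwitz test: all Delta_i > 0 iff every root has Re z < 0."""
    return all(d > 0 for d in hurwitz_minors(coeffs))

# p(z) = z^3 + 6z^2 + 11z + 6 = (z+1)(z+2)(z+3) is a Hurwitz polynomial
print(is_hurwitz([1, 6, 11, 6]))
# p(z) = z^3 - z has roots 0, 1, -1: not Hurwitz
print(is_hurwitz([1, 0, -1, 0]))
```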
the matrix $P$ is a real matrix, then this condition is equivalent to $P = P^T$ and $v^T P v > 0$, $\forall v \in \mathbb{R}^n$, $v \neq 0$. For a Hermitian/symmetric matrix $P$, the following statements are equivalent:
i) $P$ is positive definite;
ii) all eigenvalues of $P$ are positive real numbers;
iii) all leading principal minors of $P$ are positive.
Theorem 1.3.6. The system $\Sigma$ is asymptotically stable if and only if the algebraic equation (called the Lyapunov equation)
$$A^T P + P A = -Q \tag{1.39}$$
has a real, symmetric, positive definite solution $P$, for a real, symmetric, positive definite matrix $Q$.
Proof. Necessity. Assume that the system is asymptotically stable. For any $Q > 0$, let us consider the matrix
$$P = \int_0^{\infty} e^{A^T x} Q e^{Ax}\,dx. \tag{1.40}$$
The entries of the matrix $e^{A^T x} Q e^{Ax}$ are linear combinations of functions of the form $x^k e^{\lambda x}$, where $k \ge 0$, $\lambda = \lambda_i + \lambda_j$, $\lambda_i, \lambda_j \in \sigma(A)$ (see the proof of Theorem 1.1.12). Since $A$ is a stable matrix, we have that $\operatorname{Re}(\lambda_i) < 0$, $\forall \lambda_i \in \sigma(A)$. Then the improper integral (1.40) is convergent, due to the fact that its entries are linear combinations of convergent integrals of the form
$$\int_0^{\infty} x^k e^{\lambda x}\,dx = \frac{1}{(-\lambda)^{k+1}}\,\Gamma(k + 1) = \frac{k!}{(-\lambda)^{k+1}}.$$
Moreover, $\lim_{x \to \infty} e^{A^T x} Q e^{Ax} = O_n$, since $\lim_{x \to \infty} x^k e^{\lambda x} = 0$. It follows that
$$\begin{aligned} A^T P + P A &= \int_0^{\infty} A^T e^{A^T x} Q e^{Ax}\,dx + \int_0^{\infty} e^{A^T x} Q e^{Ax} A\,dx\\ &= \int_0^{\infty} \left[ \frac{d}{dx}\left(e^{A^T x}\right) Q e^{Ax} + e^{A^T x} Q \frac{d}{dx}\left(e^{Ax}\right) \right] dx\\ &= \int_0^{\infty} \frac{d}{dx}\left(e^{A^T x} Q e^{Ax}\right) dx\\ &= \left. e^{A^T x} Q e^{Ax} \right|_{x \to \infty} - \left. e^{A^T x} Q e^{Ax} \right|_{x = 0}\\ &= -Q, \end{aligned}$$
since $e^{A \cdot 0} = I_n$.
Obviously $P^T = P$, i.e. $P$ is symmetric. For any $Y \in \mathbb{R}^n$, $Y \neq 0$, it follows that $Z = e^{Ax}Y \neq 0$ (since the matrix $e^{Ax}$ is nonsingular), then
$$Y^T P Y = \int_0^{\infty} Z^T Q Z\,dx > 0,$$
therefore $P > 0$.
Sufficiency. Let us assume that there exists a matrix $P > 0$ such that $A^T P + P A = -Q$ for some matrix $Q > 0$.
Let $\lambda$ be an eigenvalue of $A$ and let $v$ be an eigenvector corresponding to $\lambda$, i.e. $Av = \lambda v$ and $v \neq 0$. Then $\overline{Av} = \overline{\lambda}\,\overline{v}$, hence
$$\overline{v}^T A^T = \overline{\lambda}\,\overline{v}^T,$$
i.e. $v^* A^T = \overline{\lambda}\, v^*$, since the matrix $A$ is real.
We obtain that
$$v^*\left(A^T P + P A\right)v = \overline{\lambda}\, v^* P v + \lambda\, v^* P v = \left(\lambda + \overline{\lambda}\right) v^* P v = -v^* Q v < 0,$$
which implies that $2\operatorname{Re}(\lambda) = \lambda + \overline{\lambda} < 0$ (since $P > 0$). In conclusion we have that $\operatorname{Re}(\lambda) < 0$, hence the system $\Sigma$ is asymptotically stable.
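Theorem 1.3.6 can be tried out numerically with a standard Lyapunov solver. The sketch below is our own illustration (it assumes `scipy`; the wrapper name `lyapunov_stable` is ours). SciPy's `solve_continuous_lyapunov(a, q)` solves $aX + Xa^H = q$, so we pass $a = A^T$ and $q = -Q$ to obtain (1.39).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_stable(A, Q=None):
    """Solve A^T P + P A = -Q and report whether P is positive definite,
    i.e. whether Y' = AY is asymptotically stable (Theorem 1.3.6)."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    P = solve_continuous_lyapunov(A.T, -Q)     # A^T P + P A = -Q
    eig = np.linalg.eigvalsh((P + P.T) / 2)    # symmetrize before testing P > 0
    return bool(np.all(eig > 0)), P

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # eigenvalues -1, -2: stable
stable, P = lyapunov_stable(A)
print(stable)
print(np.allclose(A.T @ P + P @ A, -np.eye(2)))  # residual check of (1.39)
```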
i) $V(Y)$ is positive definite;
$$\dot{V}(Y(x)) = \frac{d}{dx}\left(Y^T(x) P Y(x)\right) = \dot{Y}^T(x) P Y(x) + Y^T(x) P \dot{Y}(x) = -Y^T(x) Q Y(x) < 0,$$
Remark 1.3.8. If the matrix $Q$ in Theorem 1.3.6 is only positive semi-definite, i.e. $v^T Q v \ge 0$, $\forall v \in \mathbb{R}^n$, then $\dot{V}(Y(x)) \le 0$ in Corollary 1.3.7, so the system (1.36) is only stable.
One can show that the eigenvalues of the matrix $L$ are $\mu_{ij} = \lambda_i + \lambda_j$, where $\lambda_i, \lambda_j \in \sigma(A)$, $i, j = \overline{1,n}$; hence $\mu_{ij} \neq 0$, since $\operatorname{Re}(\lambda_i) < 0$ for any $\lambda_i \in \sigma(A)$, and $L$ is nonsingular. Therefore, (1.41) has a unique solution for all $Q > 0$.
$$Y' = f(Y), \quad f(0) = 0. \tag{1.42}$$
Definition 1.3.10. The system (1.42) is said to be stable (or the zero solution is stable) if for any $\varepsilon > 0$ there exists $\delta > 0$ such that, for any $Y_0 \in \mathbb{R}^n$ with $\|Y_0\| < \delta$, the corresponding solution verifies $\|Y(x)\| < \varepsilon$, for any $x \ge x_0$.
If the system is not stable, it is called unstable.
The system (1.42) is called asymptotically stable in a neighborhood $N$ of the origin if it is stable and, for any $Y_0 \in N$, the corresponding solution $Y(x)$ verifies $\lim_{x \to \infty} Y(x) = 0$.
If $N$ is a bounded set, the asymptotic stability is local, and if $N = \mathbb{R}^n$ it is global.
There exists a connection between the stability of the system (1.42) and the stability of its linearized system with respect to the equilibrium position.
One applies Taylor's formula in a neighborhood of the origin. Let us denote by $f'(Y)$ the Jacobian matrix of the vector function $f = [f_1, f_2, \ldots, f_n]^T$, where $f_i = f_i(Y_1, Y_2, \ldots, Y_n)$. So,
$$f'(Y) = \begin{pmatrix} \dfrac{\partial f_1}{\partial Y_1} & \dfrac{\partial f_1}{\partial Y_2} & \cdots & \dfrac{\partial f_1}{\partial Y_n}\\[6pt] \dfrac{\partial f_2}{\partial Y_1} & \dfrac{\partial f_2}{\partial Y_2} & \cdots & \dfrac{\partial f_2}{\partial Y_n}\\ \cdots & \cdots & \cdots & \cdots\\ \dfrac{\partial f_n}{\partial Y_1} & \dfrac{\partial f_n}{\partial Y_2} & \cdots & \dfrac{\partial f_n}{\partial Y_n} \end{pmatrix}.$$
One says that a real function $h$ is of order $o(x)$, and one writes $h(x) = o(x)$, if $\lim_{x \to 0} \dfrac{h(x)}{x} = 0$. Then, the Taylor formula of $f$ near $0$ can be written as
Lyapunov’s Second Method
1.4 Exercises
E 1. Determine the general solution of the following SDEs, $Y' = AY$ (the matrix $A$ is diagonalizable), if
$$\text{a) } A = \begin{pmatrix} -3 & 1 & 1\\ 1 & -3 & 1\\ 1 & 1 & -1 \end{pmatrix};\quad \text{b) } A = \begin{pmatrix} 2 & 1 & 1\\ 2 & 3 & 2\\ 3 & 3 & 4 \end{pmatrix};\quad \text{c) } A = \begin{pmatrix} 3 & 1 & 0\\ 0 & 3 & 1\\ 0 & -1 & 3 \end{pmatrix}.$$
Solution. a) [Stage 1]: Determine the eigenvalues of the matrix $A$.
$$\det(A - \lambda I_3) = 0 \Leftrightarrow \begin{vmatrix} -3 - \lambda & 1 & 1\\ 1 & -3 - \lambda & 1\\ 1 & 1 & -1 - \lambda \end{vmatrix} = 0.$$
One can expand this determinant using the triangle rule, the Sarrus rule or the 'zero' rule; it is advisable to use the last one.
By subtracting column 2 from column 1, one obtains
$$\begin{vmatrix} -4 - \lambda & 1 & 1\\ 4 + \lambda & -3 - \lambda & 1\\ 0 & 1 & -1 - \lambda \end{vmatrix} = 0,$$
and, after adding row 1 to row 2 and expanding along the first column, we have that
$$\det(A - \lambda I_3) = 0 \Leftrightarrow -(4 + \lambda)\left[(2 + \lambda)(1 + \lambda) - 2\right] = 0 \Leftrightarrow (4 + \lambda)(\lambda^2 + 3\lambda) = 0.$$
[Stage 2]: Determine the corresponding eigenvectors for every eigenvalue $\lambda$.
1. $(A - \lambda_1 I_3)v = 0 \Leftrightarrow \begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 3 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$. This is equivalent to the system
$$\begin{cases} a + b + c = 0\\ a + b + 3c = 0 \end{cases},$$
hence $c = 0$, $a = -b$ and, for $b = 1$, $\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix}$ is an eigenvector corresponding to $\lambda_1 = -4$.
2. $(A - \lambda_2 I_3)v = 0 \Leftrightarrow \begin{pmatrix} -3 & 1 & 1\\ 1 & -3 & 1\\ 1 & 1 & -1 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$. This is equivalent to the system
$$\begin{cases} -3a + b + c = 0\\ a - 3b + c = 0\\ a + b - c = 0 \end{cases},$$
hence $c = 2b$, $a = b$ and, for $b = 1$, $\begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix}$ is an eigenvector corresponding to $\lambda_2 = 0$.
3. $(A - \lambda_3 I_3)v = 0 \Leftrightarrow \begin{pmatrix} 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$. Again we obtain a system, and in this case we have
$$\begin{cases} b + c = 0\\ a + c = 0\\ a + b + 2c = 0 \end{cases},$$
hence $b = -c$, $a = -c$ and, for $c = 1$, $\begin{pmatrix} -1\\ -1\\ 1 \end{pmatrix}$ is an eigenvector corresponding to $\lambda_3 = -3$.
[Stage 3]: Determine a fundamental system of solutions for the SDE:
$$Y_1 = e^{-4x}\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix},\quad Y_2 = \begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix},\quad Y_3 = e^{-3x}\begin{pmatrix} -1\\ -1\\ 1 \end{pmatrix}.$$
[Stage 4]: Write the general solution of the SDE:
$$Y = C_1 Y_1 + C_2 Y_2 + C_3 Y_3 = C_1 e^{-4x}\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} + C_2 \begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix} + C_3 e^{-3x}\begin{pmatrix} -1\\ -1\\ 1 \end{pmatrix},$$
where $C_1, C_2, C_3$ are arbitrary real constants.
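The eigenpairs computed above for matrix a) can be double-checked in a few lines. The sketch below is our own verification (not part of the book's solution); it confirms each $Av = \lambda v$ and that the three eigenvectors are linearly independent, so the Wronskian of $Y_1, Y_2, Y_3$ never vanishes.

```python
import numpy as np

# matrix a) together with the eigenpairs computed above
A = np.array([[-3.0, 1, 1], [1, -3, 1], [1, 1, -1]])
pairs = [(-4.0, np.array([-1.0, 1, 0])),
         (0.0,  np.array([1.0, 1, 2])),
         (-3.0, np.array([-1.0, -1, 1]))]

checks = [np.allclose(A @ v, lam * v) for lam, v in pairs]
print(checks)                          # each pair verifies A v = lam v

# linear independence of the eigenvectors: det[v1, v2, v3] != 0
W0 = np.linalg.det(np.column_stack([v for _, v in pairs]))
print(abs(W0) > 1e-9)
```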
b) [Stage 1]: Determine the eigenvalues of the matrix $A$. Again we can expand the determinant $\det(A - \lambda I_3)$ using the triangle rule, the Sarrus rule or the 'zero' rule; it is advisable to use the last one.
By multiplying column 2 by $-1$ and adding it to column 3, one obtains
$$\begin{vmatrix} 2 - \lambda & 1 & 0\\ 2 & 3 - \lambda & -1 + \lambda\\ 3 & 3 & 1 - \lambda \end{vmatrix} = 0.$$
So we have that
$$\det(A - \lambda I_3) = 0 \Leftrightarrow (1 - \lambda)\left[(2 - \lambda)(6 - \lambda) - 5\right] = 0 \Leftrightarrow (1 - \lambda)(\lambda^2 - 8\lambda + 7) = 0,$$
hence the eigenvalues are $\lambda_1 = 1$, with $m_a(\lambda_1) = 2$, and $\lambda_2 = 7$, with $m_a(\lambda_2) = 1$.
1. For $\lambda_1 = 1$, the system $(A - \lambda_1 I_3)v = 0$ reduces to the single equation
$$a + b + c = 0,$$
a system with three unknowns and only one equation, hence $a = -b - c$. Since
$$\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} -b - c\\ b\\ c \end{pmatrix} = b\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} + c\begin{pmatrix} -1\\ 0\\ 1 \end{pmatrix},$$
it follows that the geometric multiplicity of $\lambda_1$ is 2 and $\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix}$, $\begin{pmatrix} -1\\ 0\\ 1 \end{pmatrix}$ are linearly independent eigenvectors corresponding to $\lambda_1 = 1$ (for $b = 1, c = 0$ and $b = 0, c = 1$ respectively).
2. $(A - \lambda_2 I_3)v = 0 \Leftrightarrow \begin{pmatrix} -5 & 1 & 1\\ 2 & -4 & 2\\ 3 & 3 & -3 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$. This is equivalent to the system
$$\begin{cases} -5a + b + c = 0\\ 2a - 4b + 2c = 0\\ 3a + 3b - 3c = 0 \end{cases},$$
hence $b = 2a$, $c = 3a$ and, for $a = 1$, $\begin{pmatrix} 1\\ 2\\ 3 \end{pmatrix}$ is an eigenvector corresponding to $\lambda_2 = 7$.
[Stage 3]: Determine a fundamental system of solutions for the SDE:
$$Y_1 = e^{x}\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix},\quad Y_2 = e^{x}\begin{pmatrix} -1\\ 0\\ 1 \end{pmatrix},\quad Y_3 = e^{7x}\begin{pmatrix} 1\\ 2\\ 3 \end{pmatrix}.$$
[Stage 4]: Write the general solution of the SDE:
$$Y = C_1 Y_1 + C_2 Y_2 + C_3 Y_3 = C_1 e^{x}\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} + C_2 e^{x}\begin{pmatrix} -1\\ 0\\ 1 \end{pmatrix} + C_3 e^{7x}\begin{pmatrix} 1\\ 2\\ 3 \end{pmatrix},$$
where $C_1, C_2, C_3$ are arbitrary real constants.
c) [Stage 1]: We obtain three distinct eigenvalues: one is real, $\lambda_1 = 3$, and the other two are complex conjugate roots, $\lambda_2 = 3 + i$ and $\lambda_3 = 3 - i$. Thus $m_a(\lambda) = 1$ for every eigenvalue $\lambda$.
[Stage 2]: Determine the corresponding eigenvectors for every eigenvalue $\lambda$.
1. $(A - \lambda_1 I_3)v = 0 \Leftrightarrow \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & -1 & 0 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$. This is equivalent to the system
$$\begin{cases} b = 0\\ c = 0 \end{cases},$$
hence $a \in \mathbb{R}$ and $b = c = 0$. Thus $\begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}$ is an eigenvector corresponding to the real eigenvalue $\lambda_1 = 3$ (for $a = 1$).
2. In the case of the complex eigenvalues, since they are conjugate, it is not necessary to use both.
$(A - \lambda_2 I_3)v = 0 \Leftrightarrow \begin{pmatrix} -i & 1 & 0\\ 0 & -i & 1\\ 0 & -1 & -i \end{pmatrix}\begin{pmatrix} a + bi\\ c + di\\ e + fi \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$, where $a$, $b$, $c$, $d$, $e$, $f$ are arbitrary real constants. This is equivalent to the system
$$\begin{cases} -ai + b + c + di = 0\\ -ci + d + e + fi = 0\\ -c - di - ei + f = 0 \end{cases} \Leftrightarrow \begin{cases} b + c = 0\\ -a + d = 0\\ d + e = 0\\ -c + f = 0 \end{cases},$$
so $\begin{pmatrix} a + bi\\ -b + ai\\ -a - bi \end{pmatrix}$, where $a, b \in \mathbb{R}$, is the complex eigenspace corresponding to the complex eigenvalue $\lambda_2$.
[Stage 3 and Stage 4]: Determine a fundamental system of solutions for the SDE and write the general solution.
First of all, we have that $Y_1(x) = C_1 e^{3x}\begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}$, where $C_1 \in \mathbb{R}$, is a solution of the SDE.
If one considers²
$$Y(x) = e^{(3+i)x}\begin{pmatrix} a + bi\\ -b + ai\\ -a - bi \end{pmatrix} = e^{3x}(\cos x + i\sin x)\begin{pmatrix} a + bi\\ -b + ai\\ -a - bi \end{pmatrix} = e^{3x}\begin{pmatrix} a\cos x - b\sin x + i(b\cos x + a\sin x)\\ -b\cos x - a\sin x + i(a\cos x - b\sin x)\\ -a\cos x + b\sin x + i(-b\cos x - a\sin x) \end{pmatrix},$$
both the real and the imaginary part of $Y(x)$ are solutions of the SDE.
The general solution, if one considers the real part of $Y(x)$, is
$$Y(x) = Y_1(x) + \operatorname{Re} Y(x) = e^{3x}\begin{pmatrix} C_1 + a\cos x - b\sin x\\ -b\cos x - a\sin x\\ -a\cos x + b\sin x \end{pmatrix},$$
where $C_1, a, b \in \mathbb{R}$.
²The complex exponential will be studied in Chapter 2.
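The complex eigenpair for matrix c) and the real solutions extracted from it can be verified numerically. The sketch below is our own check (not part of the book's solution): it confirms $Av = \lambda v$ for one choice of the free parameters and that the real part of $v\,e^{\lambda x}$ solves $Y' = AY$.

```python
import numpy as np

# matrix c) and the complex eigenpair lam = 3 + i, v = (a+bi, -b+ai, -a-bi)
A = np.array([[3.0, 1, 0], [0, 3, 1], [0, -1, 3]])
a, b = 1.0, 0.0                        # one choice of the free parameters
v = np.array([a + b * 1j, -b + a * 1j, -a - b * 1j])
lam = 3 + 1j

print(np.allclose(A @ v, lam * v))     # eigenpair check

# the real part of v e^{lam x} solves Y' = A Y (central-difference check)
Y = lambda x: (v * np.exp(lam * x)).real
x, h = 0.2, 1e-6
deriv = (Y(x + h) - Y(x - h)) / (2 * h)
print(np.allclose(deriv, A @ Y(x), atol=1e-4))
```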
W 1. Determine the general solution of the following SDEs, $Y' = AY$ (the matrix $A$ is diagonalizable), if
$$\text{a) } A = \begin{pmatrix} 4 & -2 & 2\\ -5 & 7 & -5\\ -6 & 6 & -4 \end{pmatrix};\quad \text{b) } A = \begin{pmatrix} 1 & 0 & 3\\ 2 & 1 & 2\\ 3 & 0 & 1 \end{pmatrix};\quad \text{c) } A = \begin{pmatrix} 7 & 4 & -1\\ 4 & 7 & -1\\ -4 & -4 & 4 \end{pmatrix};\quad \text{d) } A = \begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{pmatrix}.$$
$(A - \lambda_1 I_3)v_{p1} = v_1 \Leftrightarrow \begin{pmatrix} 1 & 1 & 2\\ -1 & -1 & -2\\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix}$. This is equivalent to the system
$$\begin{cases} a + b + 2c = -1\\ a + b + c = 0 \end{cases},$$
hence $c = -1$ and $a = 1 - b$. Since
$$\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 1 - b\\ b\\ -1 \end{pmatrix} = b\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} + \begin{pmatrix} 1\\ 0\\ -1 \end{pmatrix},$$
it follows that the principal vector needed is $v_{p1} = \begin{pmatrix} 1\\ 0\\ -1 \end{pmatrix}$ (for $b = 0$).
2. $(A - \lambda_2 I_3)v = 0 \Leftrightarrow \begin{pmatrix} 0 & 1 & 2\\ -1 & -2 & -2\\ 1 & 1 & 0 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$. This is equivalent to the system
$$\begin{cases} b + 2c = 0\\ -a - 2b - 2c = 0\\ a + b = 0 \end{cases},$$
hence $b = -2c$, $a = 2c$ and, for $c = 1$, $v_2 = \begin{pmatrix} 2\\ -2\\ 1 \end{pmatrix}$ is an eigenvector corresponding to $\lambda_2 = 2$. Since $m_a(\lambda_2) = m_g(\lambda_2) = 1$, there are no corresponding principal vectors for $\lambda_2 = 2$.
[Stage 3]: Determine a fundamental system of solutions for the SDE:
$$Y_1 = e^{x}\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix},\quad Y_2 = e^{x}\left[\frac{x}{1!}\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} + \begin{pmatrix} 1\\ 0\\ -1 \end{pmatrix}\right],\quad Y_3 = e^{2x}\begin{pmatrix} 2\\ -2\\ 1 \end{pmatrix}$$
is a fundamental system of solutions by Proposition 1.1.10.
[Stage 4]: Write the general solution of the SDE:
$$Y = C_1 Y_1 + C_2 Y_2 + C_3 Y_3 = C_1 e^{x}\begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} + C_2 e^{x}\begin{pmatrix} -x + 1\\ x\\ -1 \end{pmatrix} + C_3 e^{2x}\begin{pmatrix} 2\\ -2\\ 1 \end{pmatrix},$$
where $C_1, C_2, C_3$ are arbitrary real constants.
b) [Stage 1]: Determine the eigenvalues of the matrix $A$.
$$\det(A - \lambda I_3) = 0 \Leftrightarrow \begin{vmatrix} 1 - \lambda & -3 & 3\\ -2 & -6 - \lambda & 13\\ -1 & -4 & 8 - \lambda \end{vmatrix} = 0 \Leftrightarrow (\lambda - 1)^3 = 0,$$
For finding the second principal vector, one solves
$(A - \lambda I_3)v_{p2} = v_{p1} \Leftrightarrow \begin{pmatrix} 0 & -3 & 3\\ -2 & -7 & 13\\ -1 & -4 & 7 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 6\\ 0\\ 1 \end{pmatrix}$. This is equivalent to the system
$$\begin{cases} -3b + 3c = 6\\ -2a - 7b + 13c = 0\\ -a - 4b + 7c = 1 \end{cases},$$
hence $c = 2 + b$ and $a = 3b + 13$. Since
$$\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 3b + 13\\ b\\ b + 2 \end{pmatrix} = b\begin{pmatrix} 3\\ 1\\ 1 \end{pmatrix} + \begin{pmatrix} 13\\ 0\\ 2 \end{pmatrix},$$
it follows that the second principal vector can be $v_{p2} = \begin{pmatrix} 13\\ 0\\ 2 \end{pmatrix}$.
[Stage 3]: Determine a fundamental system of solutions for the SDE:
$$Y_1 = e^{x}\begin{pmatrix} 3\\ 1\\ 1 \end{pmatrix},\quad Y_2 = e^{x}\left[\frac{x}{1!}\begin{pmatrix} 3\\ 1\\ 1 \end{pmatrix} + \begin{pmatrix} 6\\ 0\\ 1 \end{pmatrix}\right],\quad Y_3 = e^{x}\left[\frac{x^2}{2!}\begin{pmatrix} 3\\ 1\\ 1 \end{pmatrix} + \frac{x}{1!}\begin{pmatrix} 6\\ 0\\ 1 \end{pmatrix} + \begin{pmatrix} 13\\ 0\\ 2 \end{pmatrix}\right]$$
is a fundamental system of solutions by Proposition 1.1.10.
[Stage 4]: Write the general solution of the SDE:
$$Y = C_1 Y_1 + C_2 Y_2 + C_3 Y_3 = C_1 e^{x}\begin{pmatrix} 3\\ 1\\ 1 \end{pmatrix} + C_2 e^{x}\begin{pmatrix} 3x + 6\\ x\\ x + 1 \end{pmatrix} + C_3 e^{x}\begin{pmatrix} \frac{3x^2}{2} + 6x + 13\\[4pt] \frac{x^2}{2}\\[4pt] \frac{x^2}{2} + x + 2 \end{pmatrix},$$
W 2. Determine the general solution of the following SDEs, $Y' = AY$, in the following situations (the matrix A is not diagonalizable):
a) $A = \begin{pmatrix} 0 & 1 & -1 \\ 2 & 1 & 1 \\ 0 & -1 & 1 \end{pmatrix}$; b) $A = \begin{pmatrix} 2 & -1 & 2 \\ 5 & -3 & 3 \\ -1 & 0 & -2 \end{pmatrix}$;
c) $A = \begin{pmatrix} 4 & -1 & 1 \\ 0 & 2 & 2 \\ 0 & -2 & 6 \end{pmatrix}$; d) $A = \begin{pmatrix} 4 & 1 & 1 \\ 0 & 2 & 2 \\ 0 & 1 & 3 \end{pmatrix}$.
b) From E 1. b) we already know that the eigenvalues of A are $\lambda_1 = 1$, $m_a(\lambda_1) = 2$ and $\lambda_2 = 7$, $m_a(\lambda_2) = 1$. We have also determined the corresponding eigenvectors: for $\lambda_1 = 1$ we have the linearly independent eigenvectors $v_1 = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$; for $\lambda_2 = 7$ we have the eigenvector $v_3 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$. We write the matrix $C = \begin{pmatrix} -1 & -1 & 1 \\ 1 & 0 & 2 \\ 0 & 1 & 3 \end{pmatrix}$, which has the inverse
\[
C^{-1} = \frac{1}{6}\begin{pmatrix} -2 & 4 & -2 \\ -3 & -3 & 3 \\ 1 & 1 & 1 \end{pmatrix}.
\]
Since
\[
e^{Dx} = \begin{pmatrix} e^x & 0 & 0 \\ 0 & e^x & 0 \\ 0 & 0 & e^{7x} \end{pmatrix},
\]
it follows that
\[
e^{Ax} = Ce^{Dx}C^{-1} = \frac{1}{6}\begin{pmatrix} 5e^x + e^{7x} & -e^x + e^{7x} & -e^x + e^{7x} \\ -2e^x + 2e^{7x} & 4e^x + 2e^{7x} & -2e^x + 2e^{7x} \\ -3e^x + 3e^{7x} & -3e^x + 3e^{7x} & 3e^x + 3e^{7x} \end{pmatrix}.
\]
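Since $e^{Dx}$ is diagonal, the closed form above can be spot-checked by multiplying out $Ce^{Dx}C^{-1}$ numerically at a sample point; a minimal sketch:

```python
import math

# Spot-check of the closed form for e^{Ax} at x = 0.3: multiply out
# C e^{Dx} C^{-1} numerically and compare entrywise.
x = 0.3
ex, e7 = math.exp(x), math.exp(7 * x)
closed = [[(5 * ex + e7) / 6, (-ex + e7) / 6, (-ex + e7) / 6],
          [(-2 * ex + 2 * e7) / 6, (4 * ex + 2 * e7) / 6, (-2 * ex + 2 * e7) / 6],
          [(-3 * ex + 3 * e7) / 6, (-3 * ex + 3 * e7) / 6, (3 * ex + 3 * e7) / 6]]

C    = [[-1, -1, 1], [1, 0, 2], [0, 1, 3]]
Cinv = [[-2 / 6, 4 / 6, -2 / 6], [-3 / 6, -3 / 6, 3 / 6], [1 / 6, 1 / 6, 1 / 6]]
eDx  = [[ex, 0, 0], [0, ex, 0], [0, 0, e7]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

product = matmul(matmul(C, eDx), Cinv)
assert all(abs(product[i][j] - closed[i][j]) < 1e-9
           for i in range(3) for j in range(3))
```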
which implies that $\lambda = 4$, $m_a(\lambda) = 2$.
The eigenvector corresponding to $\lambda$ is $v = (1, 1)^T$ and the needed principal vector is $v_p = (0, 1)^T$.
We build the matrix $C = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, which has the inverse $C^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$.
We have that the matrix $J = C^{-1}AC = \begin{pmatrix} 4 & 1 \\ 0 & 4 \end{pmatrix}$ has one Jordan cell of dimension 2 corresponding to the eigenvalue $\lambda = 4$, hence $e^{Jx} = e^{4x}\begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}$.
In conclusion,
\[
e^{Ax} = Ce^{Jx}C^{-1} = e^{4x}\begin{pmatrix} 1-x & x \\ -x & 1+x \end{pmatrix}.
\]
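A quick sanity check on a closed-form matrix exponential is the semigroup law $e^{As}e^{At} = e^{A(s+t)}$; the formula above passes it:

```python
import math

# The matrix exponential satisfies the semigroup law e^{As} e^{At} = e^{A(s+t)};
# the closed form above passes this check (up to rounding).
def M(x):
    e = math.exp(4 * x)
    return [[e * (1 - x), e * x], [-e * x, e * (1 + x)]]

def matmul2(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s, t = 0.2, 0.5
lhs, rhs = matmul2(M(s), M(t)), M(s + t)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
assert M(0) == [[1.0, 0.0], [0.0, 1.0]]   # M(0) is the identity matrix
```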
and
\[
e^{Ax} = Ce^{Jx}C^{-1} = \frac{1}{4}\begin{pmatrix} 2e^{3x} + 2e^{5x} & 0 & -2e^{3x} + 2e^{5x} \\ (2x-1)e^{3x} + e^{5x} & 4e^{3x} & (-2x-1)e^{3x} + e^{5x} \\ -2e^{3x} + 2e^{5x} & 0 & 2e^{3x} + 2e^{5x} \end{pmatrix}.
\]
We get that
\[
e^{Dx} = \begin{pmatrix} 1 & 0 \\ 0 & e^{-3x} \end{pmatrix}
\]
and
\[
e^{Ax} = \frac{1}{3}\begin{pmatrix} 1 + 2e^{-3x} & 2 - 2e^{-3x} \\ 1 - e^{-3x} & 2 + e^{-3x} \end{pmatrix}.
\]
\[
Y_p = \frac{1}{3}\int_0^x \begin{pmatrix} 1 + 2e^{-3(x-t)} & 2 - 2e^{-3(x-t)} \\ 1 - e^{-3(x-t)} & 2 + e^{-3(x-t)} \end{pmatrix}\begin{pmatrix} 0 \\ e^{-3t} \end{pmatrix} dt
= \frac{1}{3}\int_0^x \begin{pmatrix} 2e^{-3t} - 2e^{-3x} \\ 2e^{-3t} + e^{-3x} \end{pmatrix} dt
\]
\[
= \frac{1}{3}\begin{pmatrix} -\frac{2e^{-3t}}{3}\Big|_0^x - 2e^{-3x}\,t\Big|_0^x \\[4pt] -\frac{2e^{-3t}}{3}\Big|_0^x + e^{-3x}\,t\Big|_0^x \end{pmatrix}
= \frac{1}{3}\begin{pmatrix} \frac{2}{3} - \frac{2}{3}e^{-3x} - 2xe^{-3x} \\[4pt] \frac{2}{3} - \frac{2}{3}e^{-3x} + xe^{-3x} \end{pmatrix}.
\]
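The closed form of $Y_p$ can be confirmed by numerical quadrature of the integrand at a sample point; a minimal sketch with the midpoint rule:

```python
import math

# Quadrature check of Y_p at x = 1: integrate the components of
# e^{A(x-t)} B(t), with B(t) = (0, e^{-3t})^T, by the midpoint rule
# and compare with the closed form above.
x, n = 1.0, 20000
h = x / n
num = [0.0, 0.0]
for j in range(n):
    t = (j + 0.5) * h
    s = math.exp(-3 * (x - t))
    b = math.exp(-3 * t)
    num[0] += (2 - 2 * s) * b / 3 * h
    num[1] += (2 + s) * b / 3 * h

e3 = math.exp(-3 * x)
closed = [(2 / 3 - 2 * e3 / 3 - 2 * x * e3) / 3,
          (2 / 3 - 2 * e3 / 3 + x * e3) / 3]
assert abs(num[0] - closed[0]) < 1e-6
assert abs(num[1] - closed[1]) < 1e-6
```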
[Stage 4]: Now we add up the results from the previous two stages: $Y = Y_h + Y_p$. In conclusion,
\[
Y = \frac{1}{3}\begin{pmatrix} \frac{14}{3} + \frac{4}{3}e^{-3x} - 2xe^{-3x} \\[4pt] \frac{14}{3} - \frac{5}{3}e^{-3x} + xe^{-3x} \end{pmatrix}.
\]
b) We have that $A = \begin{pmatrix} 1 & -3 & 3 \\ -2 & -6 & 13 \\ -1 & -4 & 8 \end{pmatrix}$, $B(x) = \begin{pmatrix} 3 \\ -1 \\ 0 \end{pmatrix}$, $x_0 = 0$ and $Y_0 = \begin{pmatrix} 0 \\ 0 \\ 2 \end{pmatrix}$.
[Stage 1]: We need to find $e^{Ax}$. The only eigenvalue of the matrix A is $\lambda = 1$, $m_a(\lambda) = 3$ (see E 2. b)). The matrix A is not diagonalizable and one can find as an eigenvector $v = \begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix}$ and two principal vectors $v_{p1} = \begin{pmatrix} 6 \\ 0 \\ 1 \end{pmatrix}$, $v_{p2} = \begin{pmatrix} 13 \\ 0 \\ 2 \end{pmatrix}$ (see E 2. b)). We construct the matrix $C = \begin{pmatrix} 3 & 6 & 13 \\ 1 & 0 & 0 \\ 1 & 1 & 2 \end{pmatrix}$, which has the inverse $C^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ -2 & -7 & 13 \\ 1 & 3 & -6 \end{pmatrix}$. We have that
\[
J = J_{\lambda=1} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad e^{Jx} = e^x\begin{pmatrix} 1 & x & \frac{x^2}{2} \\ 0 & 1 & x \\ 0 & 0 & 1 \end{pmatrix}.
\]
It follows that
\[
e^{Ax} = e^x\begin{pmatrix} \frac{3x^2}{2} + 1 & \frac{9x^2}{2} - 3x & -9x^2 + 3x \\[2pt] \frac{x^2}{2} - 2x & \frac{3x^2}{2} + 1 - 7x & 13x - 3x^2 \\[2pt] \frac{x^2}{2} - x & \frac{3x^2}{2} - 4x & -3x^2 + 7x + 1 \end{pmatrix}.
\]
[Stage 3]: We continue with determining a particular solution of the initial system: $Y_p = \int_0^x e^{A(x-t)}B(t)\,dt$. We have that
\[
Y_p = \int_0^x e^{x-t}\begin{pmatrix} 3 + 3(x-t) \\ x - t - 1 \\ x - t \end{pmatrix} dt.
\]
Since $\int_0^x e^{x-t}(x-t)\,dt = xe^x + 1 - e^x$ (using integration by parts), it follows that
\[
Y_p = \begin{pmatrix} 3xe^x \\ xe^x - 2e^x + 2 \\ xe^x + 1 - e^x \end{pmatrix}.
\]
[Stage 4]: Now we add up the results from the previous two stages: $Y = Y_h + Y_p$. In conclusion,
\[
Y = \begin{pmatrix} -9x^2 + 3x + 3xe^x \\ 13x - 3x^2 + xe^x - 2e^x + 2 \\ -3x^2 + 7x + xe^x + 2 - e^x \end{pmatrix}.
\]
c) We have that $A = \begin{pmatrix} 4 & -2 & 2 \\ -5 & 7 & -5 \\ -6 & 6 & -4 \end{pmatrix}$, $B(x) = \begin{pmatrix} 1 \\ x \\ 0 \end{pmatrix}$, $x_0 = 0$ and $Y_0 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$.
[Stage 1]: We determine first $e^{Ax}$. The eigenvalues of the matrix A are $\lambda_1 = 2$, $m_a(\lambda_1) = 2$ and $\lambda_2 = 3$, $m_a(\lambda_2) = 1$. The matrix A is diagonalizable since $m_g(\lambda_1) = 2$ and $m_g(\lambda_2) = 1$. A pair of eligible eigenvectors (for $\lambda_1$) is $v_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$. For $\lambda_2$ one can choose the eigenvector $v_3 = \begin{pmatrix} -2 \\ 5 \\ 6 \end{pmatrix}$. For the above selection, we have $C = \begin{pmatrix} 1 & 0 & -2 \\ 1 & 1 & 5 \\ 0 & 1 & 6 \end{pmatrix}$, which has
as the inverse the matrix $C^{-1} = \begin{pmatrix} -1 & 2 & -2 \\ 6 & -6 & 7 \\ -1 & 1 & -1 \end{pmatrix}$. Since $D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$ and $e^{Dx} = \begin{pmatrix} e^{2x} & 0 & 0 \\ 0 & e^{2x} & 0 \\ 0 & 0 & e^{3x} \end{pmatrix}$, it follows that
\[
e^{Ax} = Ce^{Dx}C^{-1} = \begin{pmatrix} -e^{2x} + 2e^{3x} & 2e^{2x} - 2e^{3x} & -2e^{2x} + 2e^{3x} \\ 5e^{2x} - 5e^{3x} & -4e^{2x} + 5e^{3x} & 5e^{2x} - 5e^{3x} \\ 6e^{2x} - 6e^{3x} & -6e^{2x} + 6e^{3x} & 7e^{2x} - 6e^{3x} \end{pmatrix}.
\]
[Stage 3]: Next we determine a particular solution of the initial system: $Y_p = \int_0^x e^{A(x-t)}B(t)\,dt$. We get that
\[
Y_p = \int_0^x \begin{pmatrix} e^{2(x-t)}(2t-1) + 2e^{3(x-t)}(1-t) \\ e^{2(x-t)}(5-4t) + 5e^{3(x-t)}(t-1) \\ 6e^{2(x-t)}(1-t) + 6e^{3(x-t)}(t-1) \end{pmatrix} dt
= \begin{pmatrix} \dfrac{-2x + 2e^{3x}}{3} \\[6pt] \dfrac{2x-9}{6} + \dfrac{3e^{2x}}{2} - \dfrac{10e^{3x}}{3} \\[6pt] \dfrac{7x}{6} + \dfrac{3e^{2x}}{2} - \dfrac{2e^{3x}}{3} \end{pmatrix}.
\]
[Stage 4 ]: Now we add up the results from the previous two stages: Y =
Yh + Yp .
b) $\begin{cases} y_1' = 4y_1 + y_2 + e^x \\ y_2' = -y_1 + 2y_2, \end{cases}$ \quad $y_1(0) = 1$, $y_2(0) = 1$;
c) $\begin{cases} y_1' = 2y_1 - 6y_3 + x \\ y_2' = y_1 - y_2 + y_3 \\ y_3' = y_1 - 3y_3 + x, \end{cases}$ \quad $y_1(0) = 1$, $y_2(0) = 1$, $y_3(0) = 2$;
d) $\begin{cases} y_1' = 2y_1 + y_2 + y_3 \\ y_2' = y_1 - 2y_2 + 1 \\ y_3' = 3y_1 + 3y_2 + y_3 + e^x, \end{cases}$ \quad $y_1(0) = 0$, $y_2(0) = 0$, $y_3(0) = 2$;
e) $Y'(x) = AY(x) + B(x)$, where
\[
A = \begin{pmatrix} 4 & 0 & 1 \\ 1 & 5 & 0 \\ 1 & 0 & 4 \end{pmatrix}, \quad B(x) = \begin{pmatrix} 0 \\ x \\ 0 \end{pmatrix}, \quad Y(0) = \begin{pmatrix} 1 \\ 4 \\ 0 \end{pmatrix};
\]
Method 2 (using Theorem 1.3.4, the Routh-Hurwitz Criterion). We have that the Hurwitz polynomial is
\[
p(z) = \det(zI_2 - A) = z^2 + 4z + 4,
\]
Since ∆1 = 7 > 0, ∆2 = 7·16−10·1 = 102 > 0 and ∆3 = det H3 = 1020 > 0,
it follows that the system is asymptotically stable.
Method 3 (using Theorem 1.3.6, the Lyapunov equation). Consider $P = \begin{pmatrix} a & b & c \\ b & d & e \\ c & e & f \end{pmatrix}$. Then the Lyapunov equation $A^T P + PA = -Q$ becomes
\[
\begin{pmatrix} -6a - 2b & a - 6b - d & -4c - e \\ a - 6b - d & 2b - 6d & c - 4e \\ -4c - e & c - 4e & -2f \end{pmatrix} = \begin{pmatrix} -8 & -7 & 0 \\ -7 & -10 & 0 \\ 0 & 0 & -2 \end{pmatrix},
\]
Hence
\[
P = \int_0^\infty e^{-6x}\begin{pmatrix} 1 & 0 \\ x & 1 \end{pmatrix}\begin{pmatrix} 12 & 4 \\ 4 & 10 \end{pmatrix}\begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} dx
= \int_0^\infty e^{-6x}\begin{pmatrix} 12 & 12x + 4 \\ 12x + 4 & 12x^2 + 8x + 10 \end{pmatrix} dx
= \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.
\]
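The value of the improper integral can be confirmed numerically by truncating it at a point where $e^{-6x}$ is negligible; a minimal sketch:

```python
import math

# Numerical confirmation of P: integrate the matrix integrand
# e^{-6x} [[12, 12x+4], [12x+4, 12x^2+8x+10]] over [0, 8] (the tail
# beyond 8 is of order e^{-48}) with the midpoint rule.
n, upper = 100000, 8.0
h = upper / n
P = [[0.0, 0.0], [0.0, 0.0]]
for j in range(n):
    x = (j + 0.5) * h
    w = math.exp(-6 * x) * h
    off = (12 * x + 4) * w
    P[0][0] += 12 * w
    P[0][1] += off
    P[1][0] += off
    P[1][1] += (12 * x * x + 8 * x + 10) * w
for row, expected in zip(P, [[2.0, 1.0], [1.0, 2.0]]):
    for got, exp in zip(row, expected):
        assert abs(got - exp) < 1e-4
```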
Solution. For the chosen matrix Q, the Lyapunov equation has the solution $P = \begin{pmatrix} 3 & 2 \\ 2 & 4 \end{pmatrix}$. Hence a Lyapunov function is
\[
V(y) = y^T P y = [y_1, y_2]\begin{pmatrix} 3 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = 3y_1^2 + 4y_1y_2 + 4y_2^2.
\]
Indeed, $V(y) = 2y_1^2 + (y_1 + 2y_2)^2 > 0$ for every $y \neq 0$ and, since $y_1' = -y_2$, $y_2' = y_1 - 3y_2$, we have that
\[
V'(y) = 6y_1y_1' + 4y_1'y_2 + 4y_1y_2' + 8y_2y_2' = -2(y_1^2 + 4y_1y_2 + 12y_2^2) = -2\big((y_1 + 2y_2)^2 + 8y_2^2\big) < 0, \quad \text{for every } y \neq 0.
\]
Therefore, the system is asymptotically stable.
We have that
\[
f_1(1, 1) = -1 + 1 = 0, \quad f_2(1, 1) = -0.5 - 1 + 2 - 3 + 2.5 = 0,
\]
hence indeed $Y = [1, 1]^T$ is an equilibrium position.
The linearized system (1.44) has the matrix
\[
A = \begin{pmatrix} \dfrac{\partial f_1}{\partial y_1}(1,1) & \dfrac{\partial f_1}{\partial y_2}(1,1) \\[6pt] \dfrac{\partial f_2}{\partial y_1}(1,1) & \dfrac{\partial f_2}{\partial y_2}(1,1) \end{pmatrix} = \begin{pmatrix} -1 & 2 \\ -2 & -1 \end{pmatrix}.
\]
The characteristic polynomial of A is
\[
\det(\lambda I_2 - A) = \lambda^2 + 2\lambda + 5,
\]
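The roots of $\lambda^2 + 2\lambda + 5$ can be computed with the quadratic formula; both have negative real part, which confirms the asymptotic stability of the equilibrium:

```python
import cmath

# Roots of the characteristic polynomial lambda^2 + 2*lambda + 5:
# both have negative real part, so the equilibrium (1, 1) is
# asymptotically stable.
a, b, c = 1, 2, 5
disc = cmath.sqrt(b * b - 4 * a * c)          # sqrt(-16) = 4i
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
assert all(abs(r - e) < 1e-12 for r, e in zip(roots, [-1 + 2j, -1 - 2j]))
assert all(r.real < 0 for r in roots)
```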
One obtains the differential equation
\[
\ddot{\theta}(t) = -\frac{g}{l}\sin\theta(t).
\]
By choosing $y_1(t) = \theta(t)$ and $y_2(t) = \dot{\theta}(t)$, the differential equation is transformed into the following nonlinear SDEs:
\[
\begin{cases} \dot{y}_1 = y_2 \\ \dot{y}_2 = -\dfrac{g}{l}\sin y_1. \end{cases}
\]
Obviously, the origin is an equilibrium state of the system. The matrix of the linearized system is
\[
A = \begin{bmatrix} 0 & 1 \\ -\dfrac{g}{l}\cos y_1 & 0 \end{bmatrix}_{y_1 = 0,\, y_2 = 0} = \begin{bmatrix} 0 & 1 \\ -\dfrac{g}{l} & 0 \end{bmatrix}.
\]
The eigenvalues of A are $\lambda_{1,2} = \pm i\sqrt{\dfrac{g}{l}}$. Since $\mathrm{Re}\,\lambda_{1,2} = 0$ and $m_a(\lambda_{1,2}) = m_g(\lambda_{1,2}) = 1$, the system is stable, but not asymptotically stable.
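The purely imaginary eigenvalues can be confirmed numerically; the sample values of g and l below are assumptions of this sketch:

```python
import math

# Eigenvalues of the linearized pendulum matrix [[0, 1], [-g/l, 0]]:
# lambda^2 + g/l = 0, i.e. lambda = +-i*sqrt(g/l), purely imaginary.
# Sample values g = 9.81, l = 2.0 are assumptions of this sketch.
g, l = 9.81, 2.0
lam = 1j * math.sqrt(g / l)
for eig in (lam, -lam):
    assert abs(eig ** 2 + g / l) < 1e-12   # satisfies the characteristic equation
    assert eig.real == 0                   # purely imaginary: stable, not asymptotically
```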
1. if $|R| \leq 2$, the eigenvalues are $\lambda_{1,2} = \dfrac{R}{2} \pm i\,\dfrac{\sqrt{4 - R^2}}{2}$, so we have three possible cases:
Many interesting examples and applications can be found in [6], [15], [25],
[31], with emphasis on problems given at the Students Mathematical Contest
’Traian Lalescu’ in [3] and [34].
For applications with Matlab and Maple, see [1], [4], [16].
Chapter 2
Functions of a Complex Variable
The addition is denoted by ’+’ and it is defined as
ii) the complex number (0, 0), which is simply denoted by 0, is the null
element: z + 0 = 0 + z = z, ∀z ∈ C. Obviously, z + 0 = (x, y) + (0, 0) =
(x + 0, y + 0) = (x, y) = z and similarly 0 + z = z;
vi) the complex number (1, 0), which is simply denoted by 1, is the unity:
z ·1 = 1·z = z, ∀z ∈ C. It is obvious that z ·1 = (x·1−y ·0, x·0+y ·1) =
(x, y) = z;
and
−z = (−x, 0) ∈ C1 ,
and if $z \neq 0$, then
\[
z^{-1} = \left(\frac{x}{x^2 + 0^2}, \frac{-0}{x^2 + 0^2}\right) = \left(\frac{1}{x}, 0\right) \in \mathbb{C}_1.
\]
From i) and ii) we obtain that f is bijective and iii) and iv) imply that f
is a morphism of fields, so f is an isomorphism of fields. It follows that the
fields R and C1 are isomorphic.
Therefore, the isomorphism $\mathbb{R} \cong \mathbb{C}_1$ allows a second writing for the complex
number z = (x, y), namely
z = x + iy. (2.1)
The form (2.1) is called the algebraic form of a complex number. Also x is
called the real part of the complex number z and y the imaginary part of z.
They are denoted by Re z and Im z, respectively. The x axis is called the
real axis and the y axis the imaginary axis.
We define for every complex number z = x + iy the conjugate to be
z̄ := x − iy. A very useful remark is that z z̄ = x2 + y 2 ∈ R.
Now, one associates to the complex number z = x + iy or z = (x, y)
the point M (x, y) in the xOy plane, which will be called the complex plane.
Obviously, the correspondence x + iy 7→ M (x, y) is a bijective map between
C and the set of points in the complex plane.
Also, to any point M (x, y) in the complex plane, we can associate the
polar coordinates (ρ, θ), where ρ is the distance between the point and the
origin of xOy and θ is the angle between the radius vector of the point and
the positive direction of the real axis.
[Figure: the point M(x, y) in the complex plane, with polar radius ρ and angle θ.]
and
\[
\theta = \arctan\frac{y}{x}.
\]
We denote by arg z the value of the argument of z for θ ∈ [0, 2π). Since
cos θ and sin θ are periodic functions of period 2π, it follows that
Now, by Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$, the trigonometric form (2.2) can be written as
\[
z = \rho e^{i\theta}, \tag{2.3}
\]
which is called the exponential form of the complex number z.
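The passage between the algebraic and the exponential form can be illustrated with Python's `cmath` module (note that `cmath.polar` returns the argument in $(-\pi, \pi]$, while the text takes $\arg z \in [0, 2\pi)$):

```python
import cmath

# Round trip between the algebraic form x + iy and the exponential
# form rho * e^{i*theta} for a sample point.
z = 3 + 4j
rho, theta = cmath.polar(z)
assert abs(rho - 5.0) < 1e-12                        # rho = |z|
assert abs(rho * cmath.exp(1j * theta) - z) < 1e-12  # z = rho * e^{i*theta}
```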
Since a complex number is a pair of real numbers, many concepts of real
analysis can be extended to complex analysis. For instance, one can use
|z1 − z2 | as the distance between the points which correspond to z1 and z2 in
the complex plane. Then one can define as the open disk of radius R centered
at z0 ∈ C the set
{z ∈ C : |z − z0 | < R}.
This disk is also called an R-neighborhood of z0 . Using this, one can define
the notion of convergence of a sequence of complex numbers.
Definition 2.1.4. We say that the sequence $(z_n)_{n \geq 1} \subset \mathbb{C}$ is convergent and has the limit $z_0 \in \mathbb{C}$, if $\forall \varepsilon > 0$, $\exists N \in \mathbb{N}$ such that for $\forall n \in \mathbb{N}$, $n > N$, we have that
\[
|z_n - z_0| < \varepsilon.
\]
For some problems it is necessary to extend the complex plane with one
point.
\[
|z_n| > \varepsilon.
\]
It follows that the above sequence has no limit. One adopts the convention
that all indefinitely increasing sequences have the limit z = ∞. This is called
the point at infinity and it represents the analogue of ±∞ from R.
Each complex number z 6= 0 is characterized by a pair (ρ, θ). But z = 0
is characterized by ρ = 0 and an undetermined θ. Since the point at infinity
is the limit of all indefinitely increasing sequences, one can say that these
sequences tend to ∞, hence the point at infinity is characterized by ρ = ∞
(the 'real' infinity, i.e. the limit of all increasing divergent real sequences) and
an undetermined θ.
The set $\widetilde{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$ is called the extended complex plane.
2. we say that D is path-wise connected if any two points in D can be
joined by a (piecewise-smooth) curve L ⊂ D;
5. the domain D is called simply connected if any simple closed curve can
be shrunk to a point by continuous deformations without crossing the
boundary of D.
Remark 2.2.3. Regarding the last definition we can make the following
observations:
1. there is an equivalent definition of connectedness for open sets in terms
of curves: an open set D is connected if and only if any two points in
D can be joined by a curve L ⊂ D;
[Figure: (a) A Simply Connected Domain; (b) A Doubly Connected Domain.]
The sets $l_1, l_2, \ldots, l_n, \ldots$, which are not included in the domain, are called lacunas.
The functions u and v are called the real and the imaginary part of f
respectively and they are denoted by u = Re f and v = Im f .
Since $\widetilde{\mathbb{C}}$ is endowed with a topology, the properties of functions defined
on a topological space can be adapted for complex functions.
For instance, the notions of limit and continuity are the following.
If we would like to encompass the case when $z_0, l \in \widetilde{\mathbb{C}}$, then we have that
l is the limit of the function f at the point z0 if and only if for every sequence
(zn )n ⊂ D such that lim zn = z0 , it follows that lim f (zn ) = l.
n→∞ n→∞
f (z) = az + b,
2. The inverted function $f : \mathbb{C}^* \to \mathbb{C}$,
\[
f(z) = \frac{1}{z},
\]
where $\mathbb{C}^* = \mathbb{C} \setminus \{0\}$. For $z = x + iy \in \mathbb{C}^*$, one obtains by multiplication with the conjugate $x - iy$,
\[
f(z) = \frac{1}{x + iy} = \frac{x - iy}{x^2 + y^2},
\]
hence $\mathrm{Re}\, f = \dfrac{x}{x^2 + y^2}$ and $\mathrm{Im}\, f = \dfrac{-y}{x^2 + y^2}$.
We can use the last form indicated at the beginning of the paragraph, namely $z = \rho e^{i\theta}$, $Z = Re^{i\Theta}$. We get that
\[
Z = f(z) = \frac{1}{z} = \frac{1}{\rho e^{i\theta}} = \frac{1}{\rho}e^{-i\theta},
\]
i.e. the modulus of $f(z)$ is $R = \dfrac{1}{\rho}$ and the argument is $\Theta = -\theta$ (Figure 2.3).
[Figure 2.3: the point z = ρe^{iθ} and its image Z = 1/z, with modulus 1/ρ and argument −θ.]
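The modulus-argument description of $Z = 1/z$ can be checked with `cmath` for a sample point:

```python
import cmath

# For z = rho * e^{i*theta}, the image Z = 1/z has modulus 1/rho
# and argument -theta.
rho, theta = 2.0, 0.7
z = cmath.rect(rho, theta)
Z = 1 / z
R, Theta = cmath.polar(Z)
assert abs(R - 1 / rho) < 1e-12
assert abs(Theta - (-theta)) < 1e-12
```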
f (z) = ez .
For z = x + iy and using Euler’s formula eiy = cos y + i sin y, one obtains
$z = \rho e^{i(\theta + 2\pi)} + a$,
hence the branch fk goes to fk+1 .
In the same manner, we can prove that fn−1 goes to f0 , since
\[
e^{i\frac{\theta + 2\pi + 2(n-1)\pi}{n}} = e^{i\frac{\theta}{n} + i2\pi} = e^{i\frac{\theta}{n}}.
\]
The points a with this property are called algebraic branch points.
[Figure: the point z winding around the algebraic branch point a.]
In order to prevent passing from one branch to another, one removes from the complex plane the cut B, that is a half-line starting at a. Therefore, the branches $f_k : \mathbb{C} \setminus B \to \mathbb{C}$ are separated and well defined.
If we replace z by $\dfrac{1}{u}$, the point at infinity becomes $u = 0$. Then
\[
f\left(\frac{1}{u}\right) = \sqrt[n]{\frac{1}{u} - a} = \frac{\sqrt[n]{1 - ua}}{\sqrt[n]{u}}.
\]
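The n branches $f_k$ described above can be computed directly; each branch, raised to the n-th power, returns $z - a$ (the sample values of n, a and z below are assumptions of this sketch):

```python
import cmath

# The n branches f_k(z) = rho^(1/n) * e^{i*(theta + 2*k*pi)/n} of the
# n-th root of z - a: raising each to the n-th power gives back z - a.
n, a = 3, 1 + 1j
z = 4 - 2j
rho, theta = cmath.polar(z - a)
branches = [rho ** (1 / n) * cmath.exp(1j * (theta + 2 * k * cmath.pi) / n)
            for k in range(n)]
for fk in branches:
    assert abs(fk ** n - (z - a)) < 1e-9
# the branches are pairwise distinct
assert all(abs(branches[i] - branches[j]) > 1e-9
           for i in range(n) for j in range(i + 1, n))
```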
[Figure: the branch cut C starting at a, which prevents z from winding around the branch point.]
The branch $f_0(z) = \sqrt[n]{\rho}\, e^{i\frac{\theta}{n}}$, which is the extension of the real radical function $f(x) = \sqrt[n]{x - a}$ to the complex plane, is called the principal branch of the multiple-valued function.
5. The logarithmic function. This will be defined for every $z \in \mathbb{C}^*$ and it will be denoted by $\mathrm{Ln}\, z$. By definition, this function is the inverse of the exponential function. But since the exponential function is periodic with period $2\pi i$, hence not injective, one considers restrictions to horizontal strips
On each strip Sk , the exponential is injective. The image of each hori-
zontal straight line is
Therefore, $e^z$ is bijective with the range $\mathbb{C} \setminus \mathbb{R}_+$ and it has an inverse on each strip $S_k$.
[Figure: the horizontal strips $S_k$, $k \in \mathbb{Z}$, bounded by the lines at heights $2k\pi i$.]
\[
\mathrm{Ln}\, z = Z \Leftrightarrow e^Z = z, \quad z \in \mathbb{C}^*.
\]
As in the case of the radical function, if z describes a circle of radius ρ centered in 0 (or any closed curve that winds around 0), we will have that $z = \rho e^{i(\theta + 2\pi)}$, hence the branch $f_k$ would jump to $f_{k+1}$. The point 0 is said to be a logarithmic branch point for $\mathrm{Ln}\, z$.
For $z = \dfrac{1}{u}$, the point at infinity corresponds to $u = 0$ and one obtains $\mathrm{Ln}\, z = -\mathrm{Ln}\, u$, hence $u = 0$ is a branch point, i.e. the point at infinity is another logarithmic branch point for $\mathrm{Ln}\, z$.
Also as in the case of the radical function, R ≥ 0 is a branch cut that
forbids the movement of z around 0, hence the branches fk are separated and
well defined.
Similarly, the function Ln (z − a), a ∈ C, has the branch points z = a
and z = ∞.
2.4 Exercises
E 12. Find the real part of the complex number z (i.e. Re z), if
a) $z = 2e^{i\frac{\pi}{2}} + \mathrm{Ln}\,(1 - i)$;
b) $z = (1 + i)^2 e^{i(\pi + i)}$.
Solution. a) We have that $e^{i\frac{\pi}{2}} = \cos\frac{\pi}{2} + i\sin\frac{\pi}{2} = i$. On the other hand,
\[
\mathrm{Ln}\,(1 - i) = \ln|1 - i| + i(\arg(1 - i) + 2k\pi), \quad k \in \mathbb{Z}.
\]
As $|1 - i| = \sqrt{2}$ and $\arg(1 - i) = 2\pi - \frac{\pi}{4} = \frac{7\pi}{4}$ (one associates to the complex number $1 - i$ the point in plane M(1, −1), which belongs to the 4th quadrant), it follows that
\[
\mathrm{Ln}\,(1 - i) = \ln\sqrt{2} + i\left(\frac{7\pi}{4} + 2k\pi\right), \quad k \in \mathbb{Z}.
\]
Since $z = 2i + \ln\sqrt{2} + i\left(\frac{7\pi}{4} + 2k\pi\right)$, we have that $\mathrm{Re}\, z = \ln\sqrt{2}$.
b) First of all we get that $(1 + i)^2 = 1 + 2i - 1 = 2i$. At the same time we have that $e^{i(\pi + i)} = e^{-1}(\cos\pi + i\sin\pi) = -e^{-1}$. Then $z = 2i \cdot (-e^{-1}) = -2ie^{-1}$,
so Re z = 0.
W 10. Find the imaginary part of the complex number z (i.e. Im z), if
a) $z = e^{-1 + 2\pi i} - 2ie^{1 - i\frac{\pi}{2}}$;
b) $z = \sqrt[3]{\dfrac{1 + i\sqrt{3}}{2}}$;
c) $z = (-i)^{-i}(2i + 1)$.
Answer. a) Im z = 0;
b) $\mathrm{Im}\, z = \sin\left(\dfrac{\pi}{9} + \dfrac{2k\pi}{3}\right)$, $k \in \{0, 1, 2\}$;
c) $(-i)^{-i} = e^{(-i)\mathrm{Ln}\,(-i)}$, so $\mathrm{Im}\, z = 2e^{\frac{3\pi}{2} + 2k\pi}$, $k \in \mathbb{Z}$.
Using $\cosh y = \dfrac{e^y + e^{-y}}{2}$ and $\sinh y = \dfrac{e^y - e^{-y}}{2}$ (the hyperbolic cosine and sine), it follows that
\[
f(z) = \frac{2\cos x\cosh y - 2i\sin x\sinh y}{(x + 1) + iy}.
\]
c) $f(z) = z \cdot \mathrm{Ln}\,\bar{z}$.
Answer. a) $\mathrm{Re}\, f(x, y) = \dfrac{e^x\big((x^2 - y^2)\cos y + 2xy\sin y\big)}{(x^2 - y^2)^2 + 4x^2y^2}$;
b) $\mathrm{Re}\, f(x, y) = x^2 - y^2 + \sqrt{(x^2 - y^2 - 1)^2 + 4x^2y^2} - e^{-y}\cos x$;
c) $\mathrm{Re}\, f(\rho, \theta) = \rho\big(\ln\rho\cos\theta - ((2k + 1)\pi - \theta)\sin\theta\big)$.
hence
\[
z_1 = \frac{-i - 1 + 1 - i}{2} = -i \quad \text{and} \quad z_2 = \frac{-i - 1 - 1 + i}{2} = -1.
\]
b) We first observe that $z^4 - i\sqrt{3} = 0 \Leftrightarrow z^4 = i\sqrt{3}$. By extracting the root of order 4, one obtains
\[
z = \sqrt[4]{|i\sqrt{3}|}\left(\cos\frac{\theta + 2k\pi}{4} + i\sin\frac{\theta + 2k\pi}{4}\right), \quad k = \overline{0, 3}.
\]
We have that $\theta = \arg(i\sqrt{3}) = \dfrac{\pi}{2}$, so
\[
z = \sqrt[8]{3}\left(\cos\frac{\frac{\pi}{2} + 2k\pi}{4} + i\sin\frac{\frac{\pi}{2} + 2k\pi}{4}\right).
\]
The solutions are obtained by making k equal to 0, 1, 2 or 3:
\[
z_1 = \sqrt[8]{3}\left(\cos\frac{\pi}{8} + i\sin\frac{\pi}{8}\right), \; k = 0; \quad
z_2 = \sqrt[8]{3}\left(\cos\frac{5\pi}{8} + i\sin\frac{5\pi}{8}\right), \; k = 1;
\]
\[
z_3 = \sqrt[8]{3}\left(\cos\frac{9\pi}{8} + i\sin\frac{9\pi}{8}\right), \; k = 2; \quad
z_4 = \sqrt[8]{3}\left(\cos\frac{13\pi}{8} + i\sin\frac{13\pi}{8}\right), \; k = 3.
\]
Remark. The algebraic forms of the solutions are obtained by using standard trigonometric formulas and the fact that
\[
\cos\frac{\pi}{8} = \sqrt{\frac{1 + \cos\frac{\pi}{4}}{2}} = \sqrt{\frac{2 + \sqrt{2}}{4}}.
\]
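The four roots can be verified numerically: each one raised to the fourth power returns $i\sqrt{3}$:

```python
import math

# Each of the four roots z_k = 3^(1/8)(cos phi_k + i sin phi_k),
# phi_k = (pi/2 + 2*k*pi)/4, satisfies z^4 = i*sqrt(3).
target = 1j * math.sqrt(3)
r = 3 ** (1 / 8)
roots = []
for k in range(4):
    phi = (math.pi / 2 + 2 * k * math.pi) / 4
    roots.append(r * (math.cos(phi) + 1j * math.sin(phi)))
for z in roots:
    assert abs(z ** 4 - target) < 1e-9
```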
c) Let us denote $t = e^{iz}$. It follows that $t - \dfrac{1}{t} = -4$, so $t^2 + 4t - 1 = 0$. Since the discriminant is $\Delta = 20$, we have that the solutions are $t_1 = -2 + \sqrt{5}$ and $t_2 = -2 - \sqrt{5}$.
From $e^{iz} = t_1 = -2 + \sqrt{5}$ we deduce that
\[
iz = \mathrm{Ln}\,(-2 + \sqrt{5}) = \ln(\sqrt{5} - 2) + i \cdot 2k\pi, \quad k \in \mathbb{Z},
\]
hence $z = 2k\pi - i\ln(\sqrt{5} - 2)$.
From $e^{iz} = t_2 = -2 - \sqrt{5}$ we deduce that
\[
iz = \mathrm{Ln}\,(-2 - \sqrt{5}) = \ln(\sqrt{5} + 2) + i(\pi + 2k\pi), \quad k \in \mathbb{Z},
\]
hence $z = (2k + 1)\pi - i\ln(\sqrt{5} + 2)$.
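Both solution families can be checked numerically: since $t - \frac{1}{t} = -4$ means $e^{iz} - e^{-iz} = -4$, every z above must satisfy that relation for each k:

```python
import cmath, math

# Both solution families satisfy e^{iz} - e^{-iz} = -4 (that is,
# t - 1/t = -4 with t = e^{iz}); spot-check a few values of k.
for k in (-1, 0, 2):
    z = 2 * k * math.pi - 1j * math.log(math.sqrt(5) - 2)
    assert abs(cmath.exp(1j * z) - cmath.exp(-1j * z) + 4) < 1e-9
    z = (2 * k + 1) * math.pi - 1j * math.log(math.sqrt(5) + 2)
    assert abs(cmath.exp(1j * z) - cmath.exp(-1j * z) + 4) < 1e-9
```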
a) $z^4 + 4z^2 + 4 = 0$;
b) $z^3 + \sqrt{2}e^{i\frac{3\pi}{4}} = 0$;
c) $\dfrac{e^{iz} - e^{-iz}}{i(e^{iz} + e^{-iz})} = 4i$.
Answer. a) $z = \pm i\sqrt{2}$;
b) $e^{i\frac{3\pi}{4}} = \cos\frac{3\pi}{4} + i\sin\frac{3\pi}{4} = \frac{\sqrt{2}}{2}(-1 + i)$, hence the equation becomes $z^3 = 1 - i$ and
\[
z = \sqrt[6]{2}\left(\cos\frac{\frac{7\pi}{4} + 2k\pi}{3} + i\sin\frac{\frac{7\pi}{4} + 2k\pi}{3}\right), \quad k = \overline{0, 2};
\]
c) $\dfrac{e^{iz} - e^{-iz}}{i(e^{iz} + e^{-iz})} = 4i \Leftrightarrow 5e^{iz} + 3e^{-iz} = 0 \Rightarrow e^{2iz} = -\dfrac{3}{5}$, hence
\[
z = -\frac{i}{2}\ln\frac{3}{5} + (2k + 1)\frac{\pi}{2}, \quad k \in \mathbb{Z}.
\]
Chapter 3
Complex Differentiation
Definition 3.1.1. We say that the function f is Fréchet differentiable at
z0 ∈ D if there exist
\[
\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \quad \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0) \tag{3.2}
\]
Proof. We have that a, h and ω(z, h) are complex numbers, hence they have
the form
Using Definition 3.1.1, (3.1), (3.2) and the fact that two complex numbers are equal if and only if their real and imaginary parts are respectively equal, one obtains the following chain of equivalent statements:
\[
\begin{cases} u(x_0 + h_1, y_0 + h_2) - u(x_0, y_0) = a_1h_1 - a_2h_2 + \omega_1 \\ v(x_0 + h_1, y_0 + h_2) - v(x_0, y_0) = a_1h_2 + a_2h_1 + \omega_2. \end{cases}
\]
We have that the previous equalities are equivalent to the fact that the real functions u and v are Fréchet differentiable at $(x_0, y_0)$ and
\[
\frac{\partial u}{\partial x}(x_0, y_0) = a_1, \quad \frac{\partial u}{\partial y}(x_0, y_0) = -a_2, \quad \frac{\partial v}{\partial x}(x_0, y_0) = a_2, \quad \frac{\partial v}{\partial y}(x_0, y_0) = a_1. \tag{3.4}
\]
The equalities (3.4) are equivalent to
\[
\frac{\partial u}{\partial x}(x_0, y_0) = a_1 = \frac{\partial v}{\partial y}(x_0, y_0), \quad \frac{\partial u}{\partial y}(x_0, y_0) = -a_2 = -\frac{\partial v}{\partial x}(x_0, y_0).
\]
Theorem 3.1.5. A function $f : D \to \mathbb{C}$, $f(z) = u(x, y) + iv(x, y)$ is analytic on D if and only if the real functions u and v are Fréchet differentiable on D and they verify the Cauchy-Riemann conditions on D:
\[
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
\]
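The Cauchy-Riemann conditions can be illustrated with a finite-difference check on a sample analytic function ($f(z) = z^2e^z$ is an arbitrary choice for this sketch):

```python
import cmath

# Finite-difference check of the Cauchy-Riemann conditions for the
# sample analytic function f(z) = z^2 e^z.
f = lambda z: z * z * cmath.exp(z)
x0, y0, h = 0.4, -0.8, 1e-6

def u(x, y): return f(complex(x, y)).real
def v(x, y): return f(complex(x, y)).imag

ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
assert abs(ux - vy) < 1e-5 and abs(uy + vx) < 1e-5   # u_x = v_y, u_y = -v_x
```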
Remark 3.1.8. One can prove that the usual properties of derivatives are true for complex functions. For instance, if f and g are analytic functions on the domain D, then $af + bg$ ($a, b \in \mathbb{C}$), $fg$ and $\dfrac{f}{g}$ (for $g \neq 0$) are analytic on D and
\[
(af + bg)' = af' + bg', \quad (fg)' = f'g + fg', \quad \left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}.
\]
where a, b ∈ C.
If f : D1 → D2 is analytic on the domain D1 and g : D2 → C is analytic
on D2 , then
(g ◦ f )0 (z) = g 0 (f (z))f 0 (z).
The last formula is known as the chain rule.
If $f : D_1 \to D_2$ is analytic on the domain $D_1$ and $|f'(z)| \neq 0$ in a neighborhood of a point $z_0 \in D_1$, then the inverse function $f^{-1}$ is well defined and it is analytic on a neighborhood of the point $Z_0 = f(z_0) \in D_2$ and
\[
(f^{-1})'(Z_0) = \frac{1}{f'(z_0)}.
\]
Remark 3.1.9. One can show that the extensions to $\mathbb{C}$ of the elementary real functions are analytic and verify similar formulas. For instance,
\[
\frac{\partial^2 u}{\partial x \partial y} = \frac{\partial^2 u}{\partial y \partial x}.
\]
Due to the fact that f is analytic, u and v verify the Cauchy-Riemann conditions, and one obtains
\[
\frac{\partial^2 u}{\partial x^2} = \frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x}\right) = \frac{\partial}{\partial x}\left(\frac{\partial v}{\partial y}\right) = \frac{\partial}{\partial y}\left(\frac{\partial v}{\partial x}\right) = \frac{\partial}{\partial y}\left(-\frac{\partial u}{\partial y}\right) = -\frac{\partial^2 u}{\partial y^2},
\]
hence $\Delta u = \dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2} = 0$. Similarly, $\Delta v = 0$.
Obviously, since u and v are harmonic, the function $f = u + iv$ is harmonic too.
It was shown that if $f = u + iv$ is an analytic function, then u and v are related by the Cauchy-Riemann conditions (see Theorem 3.1.5), $\dfrac{\partial u}{\partial x} = \dfrac{\partial v}{\partial y}$, $\dfrac{\partial u}{\partial y} = -\dfrac{\partial v}{\partial x}$, and they are harmonic functions. In this case one says that v is a harmonic conjugate of u (and u is a harmonic conjugate of v). It can be proved that if v is a harmonic conjugate of u, then $v + a$ is a harmonic conjugate of u, $\forall a \in \mathbb{R}$. This is left as an exercise for the reader.
The function $F(z)$ is called the analytic continuation of the function $f(x)$ to the domain D.
This theorem allows us to extend elementary real functions to the complex plane. For instance, $e^z = e^x(\cos y + i\sin y)$ is analytic and for $z = x \in \mathbb{R}$, i.e. for $y = 0$, we have that $e^z = e^x(\cos 0 + i\sin 0) = e^x$. Therefore, $e^z$ is the analytic continuation of $e^x$ to $\mathbb{C}$. Then, starting with Euler's formulas for the real trigonometric functions $\sin x = \dfrac{e^{ix} - e^{-ix}}{2i}$ and $\cos x = \dfrac{e^{ix} + e^{-ix}}{2}$, one can determine their analytic continuations, i.e. the complex trigonometric functions defined by
\[
\sin z = \frac{e^{iz} - e^{-iz}}{2i}, \quad \cos z = \frac{e^{iz} + e^{-iz}}{2}.
\]
One can easily prove that the fundamental formula $\sin^2 z + \cos^2 z = 1$ remains valid for any complex number z.
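A quick numerical confirmation of the identity at a few complex points:

```python
import cmath

# The identity sin^2 z + cos^2 z = 1 holds for complex arguments too.
for z in (1 + 2j, -0.5 + 0.3j, 3j):
    assert abs(cmath.sin(z) ** 2 + cmath.cos(z) ** 2 - 1) < 1e-9
```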
3.4 Determination of Analytic Functions
One can prove the following result.
By imposing the second equation and by using the fact that u is a harmonic function, we can show that
\[
\frac{\partial u}{\partial y}(x, y) = -\frac{\partial v}{\partial x}(x, y) = -\int_{y_0}^{y}\frac{\partial^2 u}{\partial x^2}(x, t)\,dt - C'(x) = \int_{y_0}^{y}\frac{\partial^2 u}{\partial y^2}(x, t)\,dt - C'(x),
\]
hence
\[
\frac{\partial u}{\partial y}(x, y) = \frac{\partial u}{\partial y}(x, y) - \frac{\partial u}{\partial y}(x, y_0) - C'(x).
\]
This implies that $C'(x) = -\dfrac{\partial u}{\partial y}(x, y_0)$, so
\[
C(x) = -\int_{x_0}^{x}\frac{\partial u}{\partial y}(t, y_0)\,dt + C_1,
\]
and $v(x_0, y_0) = C_1$, which shows that v is unique up to addition of a constant $C_1$.
From $f(z) = u(x, y) + iv(x, y)$ one obtains $Z_0 = f(z_0) = u(x_0, y_0) + iC_1$, hence $Z_0 - u(x_0, y_0) = iC_1$ and
\[
f(z) = u(x, y) - u(x_0, y_0) + i\left[\int_{y_0}^{y}\frac{\partial u}{\partial x}(x, t)\,dt - \int_{x_0}^{x}\frac{\partial u}{\partial y}(t, y_0)\,dt\right] + Z_0.
\]
Method 2. We will divide this method into several steps, so we can regard it as an actual algorithm.
Step 1. Calculate $f'(z) = \dfrac{\partial u}{\partial x}(x, y) - i\dfrac{\partial u}{\partial y}(x, y)$ (by (3.5) ii));
Step 2. Replace y with 0 in $f'(z)$ (i.e. determine the restriction of $f'$ to the real axis):
\[
f'(x) = \frac{\partial u}{\partial x}(x, 0) - i\frac{\partial u}{\partial y}(x, 0) =: g(x);
\]
Method 1. As $f(0) = i$, it follows that $x_0 = 0$, $y_0 = 0$ and $Z_0 = i$. Then
\begin{align*}
f(z) &= u(x, y) - u(x_0, y_0) + i\left[\int_{y_0}^{y}\frac{\partial u}{\partial x}(x, t)\,dt - \int_{x_0}^{x}\frac{\partial u}{\partial y}(t, y_0)\,dt\right] + Z_0 \\
&= e^x\cos y + x^2 - y^2 - u(0, 0) + i\left[\int_0^y (e^x\cos t + 2x)\,dt - \int_0^x (-e^t\sin 0 - 2 \cdot 0)\,dt\right] + i \\
&= e^x\cos y + x^2 - y^2 - 1 + i\big((e^x\sin t + 2xt)\big|_0^y - 0\big) + i \\
&= e^x\cos y + x^2 - y^2 - 1 + i(e^x\sin y + 2xy) + i \\
&= e^x(\cos y + i\sin y) + x^2 - y^2 + 2ixy - 1 + i \\
&= e^z + z^2 - 1 + i.
\end{align*}
where C is a constant;
Step 4. Replace x by z, thus f (z) = ez + z 2 + C;
Step 5. Replace z with 0, so 1 + C = i and C = i − 1.
The analytic function is f (z) = ez + z 2 + i − 1.
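The reconstructed function can be spot-checked numerically: $f(0) = i$ holds, and the real part matches $e^x\cos y + x^2 - y^2 - 1$ (the constant $-1$ coming from the subtraction of $u(0, 0)$):

```python
import cmath, math

# Spot-check of f(z) = e^z + z^2 - 1 + i: f(0) = i, and
# Re f = e^x cos y + x^2 - y^2 - 1.
f = lambda z: cmath.exp(z) + z * z - 1 + 1j
assert abs(f(0) - 1j) < 1e-12
for x, y in ((0.3, -1.2), (2.0, 0.5)):
    re_expected = math.exp(x) * math.cos(y) + x * x - y * y - 1
    assert abs(f(complex(x, y)).real - re_expected) < 1e-9
```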
Problem 2. Given a harmonic function v in a simply connected domain D and two points $z_0, Z_0 \in \mathbb{C}$, determine the analytic function f on D such that
\[
\mathrm{Im}\, f = v \quad \text{and} \quad f(z_0) = Z_0.
\]
Similarly, by adapting Method 1, one obtains
\[
u(x, y) = \int_{x_0}^{x}\frac{\partial v}{\partial y}(t, y_0)\,dt - \int_{y_0}^{y}\frac{\partial v}{\partial x}(x, t)\,dt + C_1.
\]
For Method 2, we change Step 1 with the following:
Step 1. Calculate $f'(z) = \dfrac{\partial v}{\partial y}(x, y) + i\dfrac{\partial v}{\partial x}(x, y)$ (by (3.5) iii)).
Example 3.4.3. Consider $v(x, y) = \dfrac{y}{x^2 + y^2} - 2xy$, $(x, y) \neq (0, 0)$. Find an analytic function $f = u + iv$ such that $f(i) = i$.
Solution. No matter what method we use we should verify that v is a
harmonic function. This is an exercise for the reader.
Method 1. We have that
\[
\frac{\partial v}{\partial y}(x, y) = \frac{x^2 + y^2 - y \cdot 2y}{(x^2 + y^2)^2} - 2x = \frac{x^2 - y^2}{(x^2 + y^2)^2} - 2x
\]
and
\[
\frac{\partial v}{\partial x}(x, y) = \frac{-y \cdot 2x}{(x^2 + y^2)^2} - 2y = \frac{-2xy}{(x^2 + y^2)^2} - 2y.
\]
From the second Cauchy-Riemann condition, $\dfrac{\partial u}{\partial y} = -\dfrac{\partial v}{\partial x}$, it follows that
\[
\frac{\partial u}{\partial y} = \frac{2xy}{(x^2 + y^2)^2} + 2y.
\]
The first Cauchy-Riemann condition, $\dfrac{\partial u}{\partial x} = \dfrac{\partial v}{\partial y}$, leads to
\[
-\frac{x^2 + y^2 - 2x^2}{(x^2 + y^2)^2} + C'(x) = \frac{x^2 - y^2}{(x^2 + y^2)^2} - 2x,
\]
which is equivalent to
\[
\frac{x^2 - y^2}{(x^2 + y^2)^2} + C'(x) = \frac{x^2 - y^2}{(x^2 + y^2)^2} - 2x,
\]
We find that $u(x, y) = \dfrac{-x}{x^2 + y^2} + y^2 - x^2 + C_1$ and
\begin{align*}
f(z) = u + iv &= \frac{-x}{x^2 + y^2} + y^2 - x^2 + C_1 + i\left(\frac{y}{x^2 + y^2} - 2xy\right) \\
&= -\frac{x - iy}{x^2 + y^2} + y^2 - x^2 - 2ixy + C_1 \\
&= -\frac{1}{z} - z^2 + C_1.
\end{align*}
By imposing the condition $f(i) = i$, one can find $-\dfrac{1}{i} - i^2 + C_1 = i$, so $C_1 = -1$.
The analytic function is $f(z) = -\dfrac{1}{z} - z^2 - 1$, $z \neq 0$.
Method 2. We follow the algorithm presented before.
Step 1. Calculate
\[
f'(z) = \frac{\partial v}{\partial y}(x, y) + i\frac{\partial v}{\partial x}(x, y).
\]
We have that
\[
\frac{\partial v}{\partial y}(x, y) = \frac{x^2 + y^2 - y \cdot 2y}{(x^2 + y^2)^2} - 2x = \frac{x^2 - y^2}{(x^2 + y^2)^2} - 2x
\]
and
\[
\frac{\partial v}{\partial x}(x, y) = \frac{-y \cdot 2x}{(x^2 + y^2)^2} - 2y = \frac{-2xy}{(x^2 + y^2)^2} - 2y,
\]
so
\[
f'(z) = \frac{x^2 - y^2}{(x^2 + y^2)^2} - 2x + i\left(\frac{-2xy}{(x^2 + y^2)^2} - 2y\right);
\]
Step 2. Replace y with 0 in $f'(z)$. Then $z = x + iy$ becomes $z = x$ and
\[
f'(x) = \frac{x^2}{x^4} - 2x + i \cdot 0 = \frac{1}{x^2} - 2x;
\]
Step 3. Integrate $f'(x)$ with respect to x. It follows that
\[
f(x) = \int\left(\frac{1}{x^2} - 2x\right)dx = -\frac{1}{x} - x^2 + C,
\]
where C is a constant;
Step 4. Replace x by z, thus $f(z) = -\dfrac{1}{z} - z^2 + C$;
Step 5. Replace z with i, so $-\dfrac{1}{i} - i^2 + C = i$ and $C = -1$.
The analytic function is $f(z) = -\dfrac{1}{z} - z^2 - 1$, $z \neq 0$.
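The result of Example 3.4.3 can be spot-checked numerically: $f(i) = i$ and the imaginary part reproduces the given v:

```python
import cmath

# Spot-check of f(z) = -1/z - z^2 - 1: f(i) = i and
# Im f = y/(x^2 + y^2) - 2xy.
f = lambda z: -1 / z - z * z - 1
assert abs(f(1j) - 1j) < 1e-12
for x, y in ((1.0, 2.0), (-0.7, 0.4)):
    z = complex(x, y)
    im_expected = y / (x * x + y * y) - 2 * x * y
    assert abs(f(z).imag - im_expected) < 1e-12
```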
3.5 Exercises
E 15. Find the real part of the complex number z (i.e. Re z), if
a) $z = (1 + i)^2\sin(\pi + i)$;
b) $z = \cos^2\left(i + \dfrac{\pi}{4}\right)$.
Solution. a) We have that $(1 + i)^2 = 1 + 2i - 1 = 2i$ and
W 13. Find the imaginary part of the complex number z (i.e. Im z), if
a) $z = (1 - 2i)\cos\dfrac{\pi}{6} - \sin\left(\dfrac{\pi}{4} + i\right)$;
b) $z = \cos^2\left(i + \dfrac{\pi}{6}\right) - \sin^2\left(i + \dfrac{\pi}{6}\right)$.
hence $\mathrm{Im}\, z = -\sqrt{3} + \dfrac{\sqrt{2}}{4}e^{-1} - \dfrac{\sqrt{2}}{4}e$;
b) We have that
\[
z = \frac{e^{-2 + i\frac{\pi}{3}} + e^{2 - i\frac{\pi}{3}}}{2},
\]
so $\mathrm{Im}\, z = -\dfrac{\sqrt{3}}{2}\sinh 2$.
a) $f(z) = \dfrac{2\cos z}{z + 2}$, $z \neq -2$;
\[
f(z) = \frac{2\cos z}{z + 2} = \frac{2 \cdot \frac{e^{iz} + e^{-iz}}{2}}{z + 2} = \frac{e^{iz} + e^{-iz}}{z + 2}.
\]
By taking z = x + iy, one obtains
b) Using the definitions of complex functions sin and cos, it follows that
hence
\[
\mathrm{Im}\, f = -\frac{\cos y\,(e^{-x} - e^x)}{2} - \frac{\sin(2x)(e^{-2y} - e^{2y})}{2} = \cos y\sinh x + \sin(2x)\sinh(2y).
\]
a) $\sin z = 2i$;
\[
\frac{e^{iz} - e^{-iz}}{2i} = 2i \Leftrightarrow e^{iz} - e^{-iz} = -4.
\]
Denote $t = e^{iz}$. It follows that $t - \dfrac{1}{t} = -4$, so $t^2 + 4t - 1 = 0$. Since $\Delta = 20$, we have the solutions $t_1 = -2 + \sqrt{5}$ and $t_2 = -2 - \sqrt{5}$.
From $e^{iz} = t_1 = -2 + \sqrt{5}$ we get that
\[
iz = \mathrm{Ln}\,(-2 + \sqrt{5}) = \ln(\sqrt{5} - 2) + i \cdot 2k\pi, \quad k \in \mathbb{Z},
\]
hence $z = 2k\pi - i\ln(\sqrt{5} - 2)$.
From $e^{iz} = t_2 = -2 - \sqrt{5}$ we get that
\[
iz = \mathrm{Ln}\,(-2 - \sqrt{5}) = \ln(\sqrt{5} + 2) + i(\pi + 2k\pi), \quad k \in \mathbb{Z},
\]
hence $z = (2k + 1)\pi - i\ln(\sqrt{5} + 2)$;
b) Using the definitions of sin and cos, the equation becomes
a) $\cos z = 2$;
b) $\tan z = 4i$, $z \neq \dfrac{\pi}{2} + k\pi$, $k \in \mathbb{Z}$.
Answer. a) $z = -i\ln(2 \pm \sqrt{3}) + 2k\pi$, $k \in \mathbb{Z}$;
b) We have that $\dfrac{e^{iz} - e^{-iz}}{i(e^{iz} + e^{-iz})} = 4i \Leftrightarrow 5e^{iz} + 3e^{-iz} = 0 \Rightarrow e^{2iz} = -\dfrac{3}{5}$, hence
\[
z = -\frac{i}{2}\ln\frac{3}{5} + (2k + 1)\frac{\pi}{2}, \quad k \in \mathbb{Z}.
\]
a) $f(z) = ze^z$;
b) $f(z) = \cos(\bar{z})$.
\[
\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \quad \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0)
\]
at every $(x_0, y_0) \in \mathbb{R}^2$.
a) Taking z = x + iy, we have that
f (z) = (x + iy)ex (cos y + i sin y) = ex (x cos y − y sin y) + iex (x sin y + y cos y),
hence
u(x, y) = Re f = ex (x cos y − y sin y)
and
v(x, y) = Im f = ex (x sin y + y cos y).
Obviously, u and v are differentiable on R2 .
Now let us compute the partial derivatives necessary for Cauchy-Riemann
conditions (3.2) at an arbitrary point (x, y) ∈ R2 . We obtain that
\begin{align*}
\frac{\partial u}{\partial x} &= e^x(x\cos y - y\sin y) + e^x\cos y = e^x(x\cos y + \cos y - y\sin y), \\
\frac{\partial v}{\partial y} &= e^x(x\cos y + \cos y - y\sin y), \\
\frac{\partial u}{\partial y} &= e^x(-x\sin y - \sin y - y\cos y), \\
\frac{\partial v}{\partial x} &= e^x(x\sin y + y\cos y) + e^x\sin y = e^x(x\sin y + \sin y + y\cos y).
\end{align*}
From the first two equalities it follows that $\dfrac{\partial u}{\partial x} = \dfrac{\partial v}{\partial y}$ (i.e. the first Cauchy-Riemann condition is satisfied) and from the last two, one obtains $\dfrac{\partial u}{\partial y} + \dfrac{\partial v}{\partial x} = 0$ (i.e. the second Cauchy-Riemann condition is true). Since
both Cauchy-Riemann conditions are verified at every point (x, y) ∈ R2 , the
function f is analytic in C;
b) Taking $z = x + iy$, we have that
\begin{align*}
f(z) = \cos(\overline{x + iy}) = \cos(x - iy) &= \frac{e^{i(x - iy)} + e^{-i(x - iy)}}{2} = \frac{e^{ix + y} + e^{-ix - y}}{2} \\
&= \frac{e^y(\cos x + i\sin x) + e^{-y}(\cos x - i\sin x)}{2} \\
&= \frac{e^y + e^{-y}}{2}\cos x + i\,\frac{e^y - e^{-y}}{2}\sin x = \cos x\cosh y + i\sin x\sinh y,
\end{align*}
hence
\[
\frac{\partial^2 u}{\partial x^2} = 2\cos y\sinh x + x\cos y\cosh x - y\sin y\sinh x, \quad
\frac{\partial^2 u}{\partial y^2} = -x\cos y\cosh x - 2\cos y\sinh x + y\sin y\sinh x,
\]
we get that $\Delta u = 0$, hence the necessary condition that u is harmonic is fulfilled.
Step 1. Calculate $f'(z) = \dfrac{\partial u}{\partial x}(x, y) - i\dfrac{\partial u}{\partial y}(x, y)$ (by (3.5) ii)). We have that
\[
f'(z) = \cos y(\cosh x + x\sinh x) - y\sin y\cosh x - i\big(-x\sin y\cosh x - \sinh x(\sin y + y\cos y)\big);
\]
Step 2. Replace y with 0 in $f'(z)$. Then $z = x + iy$ becomes $z = x$ and $f'(x) = \cosh x + x\sinh x - i \cdot 0 = \cosh x + x\sinh x$.
Step 3. Integrate $f'(x)$ with respect to x. Since $\int x\sinh x\,dx = x\cosh x - \sinh x + C$, where C is a constant, it follows that
\[
f(x) = \int(\cosh x + x\sinh x)\,dx = \sinh x + x\cosh x - \sinh x + C = x\cosh x + C;
\]
Step 4. Replace x by z, thus f (z) = z cosh z + C;
Step 5. Replace z with 1, so C = 1.
The analytic function is f (z) = z cosh z + 1.
b) Let us prove that $v(x, y) = \mathrm{Im}\, f = y + \arctan\dfrac{y}{x}$ is a harmonic function. For that, we calculate $\Delta v = \dfrac{\partial^2 v}{\partial x^2} + \dfrac{\partial^2 v}{\partial y^2}$. Since
\begin{align*}
\frac{\partial v}{\partial x} &= \frac{1}{1 + \left(\frac{y}{x}\right)^2} \cdot \left(\frac{y}{x}\right)'_x = \frac{-y}{x^2 + y^2}, \\
\frac{\partial v}{\partial y} &= 1 + \frac{1}{1 + \left(\frac{y}{x}\right)^2} \cdot \left(\frac{y}{x}\right)'_y = 1 + \frac{x}{x^2 + y^2}, \\
\frac{\partial^2 v}{\partial x^2} &= \frac{y}{(x^2 + y^2)^2} \cdot 2x = \frac{2xy}{(x^2 + y^2)^2}, \\
\frac{\partial^2 v}{\partial y^2} &= \frac{-x}{(x^2 + y^2)^2} \cdot 2y = \frac{-2xy}{(x^2 + y^2)^2},
\end{align*}
we get that $\Delta v = 0$, hence the necessary condition that v is harmonic is fulfilled.
Step 1. Calculate $f'(z) = \dfrac{\partial v}{\partial y}(x, y) + i\dfrac{\partial v}{\partial x}(x, y)$ ((3.5) iii)). We have that
\[
f'(z) = 1 + \frac{x}{x^2 + y^2} - i\,\frac{y}{x^2 + y^2};
\]
Step 2. Replace y with 0 in $f'(z)$. Then $z = x + iy$ becomes $z = x$ and
\[
f'(x) = 1 + \frac{1}{x};
\]
Step 3. Integrate $f'(x)$ with respect to x. It follows that
\[
f(x) = \int\left(1 + \frac{1}{x}\right)dx = x + \ln x + C,
\]
where C is a constant;
Step 4. Replace x by z, thus $f(z) = z + \mathrm{Ln}\, z + C$;
Step 5. Replace z with i, so $C = -\mathrm{Ln}\, i$.
The analytic function is $f(z) = z + \mathrm{Ln}\, z - \mathrm{Ln}\, i$, $z \neq 0$.
e) $\mathrm{Im}\, f = e^{x+y}\sin(x - y)$, $f(0) = i$;
g) $\mathrm{Re}\, f = \varphi(x^2 - y^2)$, $\varphi \in C^2$;
h) $\mathrm{Im}\, f = \varphi\left(\dfrac{y}{x}\right)$, $(x, y) \neq (0, 0)$, $\varphi \in C^2$;
i) $\mathrm{Re}\, f = x^2 - y^2 + e^x\varphi(y)$, $\varphi \in C^2$;
j) $\mathrm{Re}\, f + \mathrm{Im}\, f = \varphi(x^2 + y^2)$, $\varphi \in C^2$;
k) $\mathrm{Re}\, f + \varphi(\mathrm{Im}\, f) = x^2 - y^2$, $\varphi \in C^2$.
f) $f(z) = z^2e^z$;
g) Denote by $t = x^2 - y^2$. Let us impose $u(x, y) = \mathrm{Re}\, f$ to be a harmonic function. We have that
\[
\frac{\partial u}{\partial x} = 2x\varphi'(t), \quad \frac{\partial u}{\partial y} = -2y\varphi'(t)
\]
and
\[
\frac{\partial^2 u}{\partial x^2} = 4x^2\varphi''(t) + 2\varphi'(t), \quad \frac{\partial^2 u}{\partial y^2} = 4y^2\varphi''(t) - 2\varphi'(t).
\]
This implies that $\Delta u = 4(x^2 + y^2)\varphi''(t) = 0$, hence $\varphi''(t) = 0$ and, integrating with respect to t, it follows that $\varphi(t) = c_1t + c_2$, where $c_1$ and $c_2$ are arbitrary real constants.
If the function $\varphi$ does not have the form specified above, we cannot find the analytic function f.
If $\varphi(t) = c_1t + c_2$, then $u(x, y) = c_1(x^2 - y^2) + c_2$ and one can find $f(z) = c_1z^2 + C$, $C \in \mathbb{C}$;
h) Denote $t = \dfrac{y}{x}$. Let us impose $v(x, y) = \mathrm{Im}\, f$ to be a harmonic function. Then
\[
\frac{\partial v}{\partial x} = -\frac{y}{x^2}\varphi'(t), \quad \frac{\partial v}{\partial y} = \frac{1}{x}\varphi'(t)
\]
and
\[
\frac{\partial^2 v}{\partial x^2} = \frac{y^2}{x^4}\varphi''(t) + \frac{2y}{x^3}\varphi'(t), \quad \frac{\partial^2 v}{\partial y^2} = \frac{1}{x^2}\varphi''(t).
\]
This implies that $\Delta v = \dfrac{1}{x^2}\left(\dfrac{y^2}{x^2} + 1\right)\varphi''(t) + \dfrac{2y}{x^3}\varphi'(t) = 0$, or $(t^2 + 1)\varphi''(t) + 2t\varphi'(t) = 0$.
The last equation is equivalent to $\dfrac{\varphi''(t)}{\varphi'(t)} = -\dfrac{2t}{t^2 + 1}$ and, integrating with respect to t, it follows that $\ln\varphi'(t) = -\ln(t^2 + 1) + c$, hence $\ln\varphi'(t) = \ln\dfrac{c_1}{t^2 + 1}$ and $\varphi'(t) = \dfrac{c_1}{t^2 + 1}$. Integrating once again with respect to t, one obtains $\varphi(t) = c_1\arctan t + c_2$, where $c_1, c_2$ are arbitrary real constants.
If the function $\varphi$ does not have the form specified above, we cannot find the analytic function f.
If $\varphi(t) = c_1\arctan t + c_2$, then $v(x, y) = c_1\arctan\dfrac{y}{x} + c_2$ and one can find $f(z) = c_1\mathrm{Ln}\, z + C$, $C \in \mathbb{C}$;
i) Let us impose $u(x, y) = \mathrm{Re}\, f$ to be a harmonic function. It follows that $\Delta u = e^x(\varphi''(y) + \varphi(y)) = 0$, hence $\varphi''(y) + \varphi(y) = 0$.
One recognizes this as a linear homogeneous differential equation of order 2 with constant coefficients, having the solution $\varphi(y) = c_1\cos y + c_2\sin y$, where $c_1, c_2$ are arbitrary real constants.
If the function $\varphi$ does not have the form specified above, we cannot find the analytic function f.
If $\varphi(y) = c_1\cos y + c_2\sin y$, it follows that $u(x, y) = x^2 - y^2 + e^x(c_1\cos y + c_2\sin y)$ and $f(z) = z^2 + e^z(c_1 - ic_2) + C$, $C \in \mathbb{C}$;
j) Denote $t = x^2 + y^2$. Using the hypothesis $u(x, y) + v(x, y) = \varphi(t)$, one can determine the following:
\[
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial x} = 2x\varphi'(t), \quad \frac{\partial u}{\partial y} + \frac{\partial v}{\partial y} = 2y\varphi'(t),
\]
\[
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 v}{\partial x^2} = 4x^2\varphi''(t) + 2\varphi'(t), \quad \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 v}{\partial y^2} = 4y^2\varphi''(t) + 2\varphi'(t).
\]
It follows that $\Delta u + \Delta v = 4t\varphi''(t) + 4\varphi'(t)$. But the functions u and v are both harmonic, so $\Delta u = 0$ and $\Delta v = 0$, hence
\[
t\varphi''(t) + \varphi'(t) = 0 \Leftrightarrow \frac{\varphi''(t)}{\varphi'(t)} = -\frac{1}{t} \Leftrightarrow \ln\varphi'(t) = -\ln t + c,
\]
so
\[
\varphi'(t) = \frac{c_1}{t} \Rightarrow \varphi(t) = c_1\ln t + c_2,
\]
where $c_1, c_2$ are arbitrary real constants.
So $u + v = c_1\ln(x^2 + y^2) + c_2$. We differentiate this with respect to x and with respect to y. One obtains
\[
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial x} = \frac{2c_1x}{x^2 + y^2}, \quad \frac{\partial u}{\partial y} + \frac{\partial v}{\partial y} = \frac{2c_1y}{x^2 + y^2}.
\]
Since the function f is analytic, using the Cauchy-Riemann conditions (3.2), it follows that
\[
\frac{\partial u}{\partial x} - \frac{\partial u}{\partial y} = \frac{2c_1x}{x^2 + y^2}, \quad \frac{\partial u}{\partial y} + \frac{\partial u}{\partial x} = \frac{2c_1y}{x^2 + y^2}.
\]
By solving this system with the unknowns $\dfrac{\partial u}{\partial x}$ and $\dfrac{\partial u}{\partial y}$, one obtains
\[
\frac{\partial u}{\partial x} = \frac{c_1(x + y)}{x^2 + y^2}, \quad \frac{\partial u}{\partial y} = \frac{c_1(y - x)}{x^2 + y^2}
\]
and
\[
f'(z) = \frac{\partial u}{\partial x} - i\frac{\partial u}{\partial y} = \frac{c_1(x + y)}{x^2 + y^2} - i\,\frac{c_1(y - x)}{x^2 + y^2}.
\]
It follows that $f(z) = c_1(1 + i)\mathrm{Ln}\, z + C$, $C \in \mathbb{C}$;
k) f(z) = (c₁/(c₁² + 1))(1 + i)z² + C, c₁ ∈ R, C ∈ C.
Since

∂²v/∂x² = 2e^{x²−y²}(2x⁴ − 2x²y² + 5x² − y² + 1),

∂²v/∂y² = 2e^{x²−y²}(−2y⁴ + 2x²y² + 5y² − x² − 1),

one obtains Δv = 4e^{x²−y²}(x² + y²)(x² − y² + 2) ≠ 0 if x² − y² + 2 ≠ 0, hence the function v is not harmonic on R² and, as a consequence of Theorem 3.4.1, one cannot determine an analytic function f, since Im f would have to be a harmonic function.
Chapter 4
Complex Integration

Complex integrals ∫_Γ f(z)dz are line integrals in the complex plane, where f is a single-valued complex function defined on an open subset of C, integrated over a piecewise smooth curve Γ in the complex plane.
The most important relations between the analyticity of a function and the value of a specific integral over a closed contour are given by the Cauchy theorems (1814, 1825, 1831).
Complex line integrals are used, for example, in quantum mechanics, in determining probability amplitudes in quantum scattering theory.
We associate to f, δ and (τⱼ) the integral sum

σ_δ(f) = Σⱼ₌₁ⁿ f(ζⱼ)(zⱼ − zⱼ₋₁),

where zⱼ = z(tⱼ) = x(tⱼ) + iy(tⱼ) =: xⱼ + iyⱼ and ζⱼ = z(τⱼ) = x(τⱼ) + iy(τⱼ) =: ξⱼ + iηⱼ.
Theorem 4.1.2. If f is a piecewise continuous function on Γ, then f is integrable over Γ and

∫_Γ f(z)dz = ∫_Γ u dx − v dy + i ∫_Γ v dx + u dy. (4.1)

Proof. Let us calculate the integral sums; the indices k are omitted, except in δₖ. We have that

σ_{δₖ}(f) = Σⱼ₌₁ⁿ f(ζⱼ)(zⱼ − zⱼ₋₁)
= Σⱼ₌₁ⁿ [u(ξⱼ, ηⱼ) + iv(ξⱼ, ηⱼ)][xⱼ − xⱼ₋₁ + i(yⱼ − yⱼ₋₁)]
= Σⱼ₌₁ⁿ [u(ξⱼ, ηⱼ)(xⱼ − xⱼ₋₁) − v(ξⱼ, ηⱼ)(yⱼ − yⱼ₋₁)] +
+ i Σⱼ₌₁ⁿ [u(ξⱼ, ηⱼ)(yⱼ − yⱼ₋₁) + v(ξⱼ, ηⱼ)(xⱼ − xⱼ₋₁)].

Here we have the integral sums for two real line integrals over Γ. Since u and v are piecewise continuous on Γ, they are integrable over Γ, hence the limits of the integral sums as k → ∞ exist, are finite and they are the line integrals ∫_Γ u dx − v dy and ∫_Γ v dx + u dy. It follows that lim_{k→∞} σ_{δₖ}(f) exists, is finite and is equal to ∫_Γ f(z)dz.
One obtains (4.1) by taking the limit as k → ∞ in the equalities above.
Remark 4.1.3. Relation (4.1) shows that the complex integral is a pair of real line integrals. This implies that the complex integral inherits the properties of the real one. For instance, we have that

∫_{Γ₁∪Γ₂} f(z)dz = ∫_{Γ₁} f(z)dz + ∫_{Γ₂} f(z)dz,

∫_Γ (af(z) + bg(z))dz = a ∫_Γ f(z)dz + b ∫_Γ g(z)dz, where a, b ∈ C,

∫_{BA} f(z)dz = −∫_{AB} f(z)dz, where AB is an arc of a curve.
The length of a curve Γ is denoted by l_Γ and it is defined as the supremum of the lengths of all inscribed polygonal paths. One can show that

l_Γ = ∫_a^b |z′(t)|dt.

Using the fact that |zⱼ − zⱼ₋₁| is the distance between the points zⱼ₋₁ and zⱼ, we have that Σⱼ₌₁ⁿ |zⱼ − zⱼ₋₁| is the length of the polygonal line corresponding to the partition δₖ (see Figure 4.2).
These lengths form an increasing sequence and their limit is l_Γ. Taking the limit as k → ∞ (with δₖ₊₁ a refinement of δₖ, ∀k ∈ N*, and lim_{k→∞} ν(δₖ) = 0), one obtains (4.2) from the above inequality.

Figure 4.2: The Partition δ
Proof. Let f(z) = u(x, y) + iv(x, y), z = x + iy, u, v ∈ C¹(D) and let Δ be the domain bounded by Γ (Figure 4.3).
The following similar result holds true.
Now let us consider the case of n-uply connected domains (Figure 4.5 (a)).
Proof. We will use an induction type argument. First consider the case n = 1 (Figure 4.4 (a)).

Figure 4.4: (a) The Doubly Connected Domain D; (b) The Simply Connected Domain D \ AB

Instead of Γ, the boundary of D \ AB is ∂(D \ AB) = Γ ∪ AB ∪ Γ₁⁻ ∪ BA, where the sign − on Γ₁ indicates that Γ₁ is traversed in the clockwise sense (because the trigonometric sense for ∂(D \ AB) is the sense for which the domain is always on the left).
Using Proposition 4.2.2, we can now apply Theorem 4.2.1 (Cauchy's Fundamental Theorem), hence ∮_{∂(D\AB)} f(z)dz = 0, i.e.

∮_Γ f(z)dz + ∫_{AB} f(z)dz + ∮_{Γ₁⁻} f(z)dz + ∫_{BA} f(z)dz = 0.

Since ∫_{BA} f(z)dz = −∫_{AB} f(z)dz and ∮_{Γ₁⁻} f(z)dz = −∮_{Γ₁} f(z)dz, one obtains

∮_Γ f(z)dz = ∮_{Γ₁} f(z)dz.

For the induction step, cutting the domain along an arc AB splits the integral into two closed-contour integrals,

∮_Γ f(z)dz = ∮_{BMAB} f(z)dz + ∮_{BANB} f(z)dz,

where BMAB and BANB denote the two closed contours formed by the cut (M and N being points on Γ).
Finally, we can apply the induction assumption to the first integral and the case n = 1 to the second integral, hence

∮_Γ f(z)dz = Σⱼ₌₁ⁿ⁻¹ ∮_{Γⱼ} f(z)dz + ∮_{Γₙ} f(z)dz = Σⱼ₌₁ⁿ ∮_{Γⱼ} f(z)dz.

Figure 4.5: (a) The n-uply Connected Domain D
Example 4.2.4. Let us calculate the integrals Iₙ = ∮_Γ (z − a)ⁿ dz, where n ∈ Z and a ∈ C is an interior point of the domain bounded by the closed contour Γ (Figure 4.6 (a)).
Case I. If n ≥ 0, then the function (z − a)ⁿ is a polynomial, hence an analytic function in C. By Theorem 4.2.1 (Cauchy's Fundamental Theorem), the integral Iₙ = ∮_Γ (z − a)ⁿ dz = 0.
Case II. If n = −1, i.e. I₋₁ = ∮_Γ 1/(z − a) dz, then the function 1/(z − a) is analytic in the domain D \ {a}, which is doubly connected ({a} is a lacuna). Consider a circle of radius r centred at a, denoted Γ₁ (Figure 4.6 (b)). By Theorem 4.2.3, case n = 1, we have that ∮_Γ 1/(z − a) dz = ∮_{Γ₁} 1/(z − a) dz.

Figure 4.6: (a) The Point a is an Interior Point of D; (b) A Circle Centred in a, Included in D
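The two cases above can be checked numerically. The following sketch is not part of the original text; the point a, the radius r = 1 and the step count are arbitrary choices. It approximates Iₙ = ∮_Γ (z − a)ⁿ dz over the circle Γ by a Riemann sum on the parametrization z(t) = a + re^{it}, dz = ire^{it}dt.

```python
import cmath
import math

def circle_integral(f, a, r, steps=4000):
    """Approximate the contour integral of f over the circle |z - a| = r,
    using the parametrization z(t) = a + r*e^{it}, dz = i*r*e^{it} dt."""
    h = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        z = a + r * cmath.exp(1j * k * h)
        dz = 1j * r * cmath.exp(1j * k * h) * h
        total += f(z) * dz
    return total

a = 0.3 + 0.2j
for n in (2, 1, 0, -1, -2):
    I_n = circle_integral(lambda z: (z - a) ** n, a, 1.0)
    print(n, I_n)  # ~0 for every n shown except n = -1, where I_n ~ 2*pi*i
```

The sum is the periodic trapezoidal rule in disguise, so for these integrands it reproduces 0 and 2πi essentially to machine precision.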
f(a) = (1/(2πi)) ∮_Γ f(z)/(z − a) dz. (4.6)

Proof. Consider an arbitrary constant ε > 0, fixed for the entire proof.
Since f is analytic in D, it is continuous in D, hence f is continuous at the point a ∈ D. Then there exists δ = δ(ε/(4π)) > 0 such that ∀z ∈ D with |z − a| < δ we have that |f(z) − f(a)| < ε/(4π).
Let r be arbitrary and fixed such that 0 < r < δ and let Γ_r be the circle of radius r centred at a (Figure 4.7 (b)).

Figure 4.7: Suggestive Drawings for the Proof of Theorem 4.2.5

Then, for any z ∈ Γ_r, we obtain that |z − a| = r < δ, hence |f(z) − f(a)| < ε/(4π). It follows that

|(f(z) − f(a))/(z − a)| < ε/(4πr).

Using Proposition 4.1.4 we have that

|∮_{Γ_r} (f(z) − f(a))/(z − a) dz| ≤ (ε/(4πr)) l_{Γ_r} = (ε/(4πr)) · 2πr = ε/2 < ε.
Let us summarize what we have done so far. We have proved the following statement: ∀ε > 0, ∃δ > 0 such that ∀r > 0 with |r − 0| < δ we have that

|∮_{Γ_r} (f(z) − f(a))/(z − a) dz − 0| < ε,

that is, lim_{r→0} ∮_{Γ_r} (f(z) − f(a))/(z − a) dz = 0.
Now the function f(z)/(z − a) is analytic in the doubly connected domain given by Δ \ {a}. Using Theorem 4.2.3 we obtain that

∮_Γ f(z)/(z − a) dz = ∮_{Γ_r} f(z)/(z − a) dz = ∮_{Γ_r} (f(z) − f(a))/(z − a) dz + f(a) ∮_{Γ_r} 1/(z − a) dz.

The last integral is equal to 2πi (see Example 4.2.4), hence, by taking the limit as r → 0, we get that

∮_Γ f(z)/(z − a) dz = lim_{r→0} ∮_{Γ_r} (f(z) − f(a))/(z − a) dz + lim_{r→0} 2πif(a),

so

∮_Γ f(z)/(z − a) dz = 0 + 2πif(a) = 2πif(a). (4.7)

The last formula is equivalent to (4.6) and the proof is done.
Remark 4.2.6. Cauchy's Integral Formula (4.6) shows that the values of an analytic function f over a closed contour Γ determine the values of f in the whole domain bounded by Γ.
From a practical point of view, (4.7) gives a method to calculate integrals with the indicated structure simply by multiplying the value of the numerator at a by 2πi.
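Formula (4.7) lends itself to a quick numerical sanity check. The sketch below is not part of the original text; the sample function e^z and the point a are arbitrary choices. It compares ∮_Γ f(z)/(z − a) dz, computed by a Riemann sum on a small circle around a, with 2πif(a).

```python
import cmath
import math

def circle_integral(f, a, r, steps=4000):
    # Riemann sum for the contour integral over the circle z(t) = a + r*e^{it}
    h = 2 * math.pi / steps
    return sum(f(a + r * cmath.exp(1j * k * h)) * 1j * r * cmath.exp(1j * k * h) * h
               for k in range(steps))

a = 0.5 + 0.1j
lhs = circle_integral(lambda z: cmath.exp(z) / (z - a), a, 1.0)
rhs = 2j * math.pi * cmath.exp(a)
print(abs(lhs - rhs))  # a very small number
```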
Remark 4.2.7. The function f being analytic in D, it is continuous in D. Therefore lim_{u→a} f(u) = f(a). By (4.7) we have that

lim_{u→a} ∮_Γ f(z)/(z − u) dz = 2πi lim_{u→a} f(u) = 2πif(a) = ∮_Γ f(z)/(z − a) dz,

i.e. Cauchy's integral is continuous at a.
Here ω(a, h) denotes a quantity with lim_{h→0} ω(a, h)/h = 0. Thus the derivative can be calculated by the usual formula

f′(a) = lim_{h→0} (f(a + h) − f(a))/h.
Let us calculate the above ratio by using the integral representation (4.6) of f. We obtain the following:

(f(a + h) − f(a))/h = (1/h)[(1/(2πi)) ∮_Γ f(z)/(z − a − h) dz − (1/(2πi)) ∮_Γ f(z)/(z − a) dz]
= (1/(2πih)) ∮_Γ f(z)[1/(z − a − h) − 1/(z − a)] dz
= (1/(2πih)) ∮_Γ f(z) · h/((z − a)(z − a − h)) dz.

Taking the limit as h → 0 (the limit exists since f is analytic (see Remark 4.2.7)) we have that

f′(a) = (1!/(2πi)) ∮_Γ f(z)/(z − a)² dz,

i.e. (4.8) is true for n = 1.
Now we prove the induction step. Let us assume that (4.8) is true for k − 1 and we will show that it is true for k. Since f⁽ᵏ⁾(z) = (f⁽ᵏ⁻¹⁾(z))′, as in the case n = 1, f⁽ᵏ⁾(a) is the limit, as h → 0, of the ratio

(f⁽ᵏ⁻¹⁾(a + h) − f⁽ᵏ⁻¹⁾(a))/h.

Using the induction assumption, this ratio is equal to

(1/h)[((k−1)!/(2πi)) ∮_Γ f(z)/(z − a − h)ᵏ dz − ((k−1)!/(2πi)) ∮_Γ f(z)/(z − a)ᵏ dz]
= ((k−1)!/(2πih)) ∮_Γ f(z)[1/(z − a − h)ᵏ − 1/(z − a)ᵏ] dz.

By the binomial formula

(A − B)ᵏ = Aᵏ − C(k,1)Aᵏ⁻¹B + ··· + (−1)ᵏ C(k,k)Bᵏ

(C(k, j) denoting the binomial coefficients), applied to ((z − a) − h)ᵏ, one obtains

1/(z − a − h)ᵏ − 1/(z − a)ᵏ = N/((z − a − h)ᵏ(z − a)ᵏ),

where

N = (z − a)ᵏ − (z − a − h)ᵏ = C(k,1)(z − a)ᵏ⁻¹h − C(k,2)(z − a)ᵏ⁻²h² + ··· − (−1)ᵏhᵏ,

i.e. N = k(z − a)ᵏ⁻¹h + h²C(h), where lim_{h→0} C(h) = −C(k,2)(z − a)ᵏ⁻². We can cancel h and we obtain that

(f⁽ᵏ⁻¹⁾(a + h) − f⁽ᵏ⁻¹⁾(a))/h
= ((k−1)!/(2πi)) [∮_Γ f(z) k(z − a)ᵏ⁻¹/((z − a − h)ᵏ(z − a)ᵏ) dz + h ∮_Γ f(z)C(h)/((z − a − h)ᵏ(z − a)ᵏ) dz].

Taking the limit as h → 0, one obtains (since k(k − 1)! = k!)

f⁽ᵏ⁾(a) = (k!/(2πi)) ∮_Γ f(z)/(z − a)ᵏ⁺¹ dz,

hence (4.8) is true for all n ≥ 1.
Remark 4.2.9. The above theorem shows that an analytic function f has derivatives of any order and the values of the function over a contour Γ determine the values of all derivatives in the whole domain bounded by Γ.
From a practical point of view, (4.8) gives the following formula for integrals of this structure:

∮_Γ f(z)/(z − a)ⁿ⁺¹ dz = (2πi/n!) f⁽ⁿ⁾(a). (4.9)
Sₙ = 1 + z + z² + ··· + zⁿ = (1 − zⁿ⁺¹)/(1 − z), for z ≠ 1.

For z = 1, Sₙ = n + 1 is a divergent sequence, hence the series is divergent.
Consider the sequence (zⁿ)ₙ≥₁. We have the following three possibilities:
Therefore, for |z| < 1 the geometric series is convergent and its sum is

S = lim_{n→∞} Sₙ = lim_{n→∞} (1 − zⁿ⁺¹)/(1 − z) = 1/(1 − z).

We have obtained that

Σ_{n≥0} zⁿ = 1/(1 − z). (4.10)

For |z| ≥ 1 the geometric series is divergent.
Moreover, for any r ∈ (0, 1), in the disk |z| ≤ r we have that |zⁿ| ≤ rⁿ and the series Σ_{n≥0} rⁿ is convergent, hence Σ_{n≥0} zⁿ is uniformly convergent in any disk |z| ≤ r, thus the geometric series can be differentiated or integrated term by term.
A result that is specific to the complex framework is the following theorem.
Theorem 4.3.2 (Weierstrass). If the functions fₙ(z), n ∈ N*, are analytic in a domain D and the series Σ_{n=1}^∞ fₙ(z) is uniformly convergent to the function f(z) in any closed subdomain D₁ of D, then
i) f(z) is analytic in the domain D;
ii) f⁽ᵐ⁾(z) = Σ_{n=1}^∞ fₙ⁽ᵐ⁾(z), ∀m ∈ N*;
iii) the series Σ_{n=1}^∞ fₙ⁽ᵐ⁾(z) is uniformly convergent in any closed subdomain D₁ of D.
Moreover,

cₙ = f⁽ⁿ⁾(a)/n!, n ∈ N. (4.12)

Proof. Let z ∈ D_R(a). Consider a disk D_ρ(a) with |z − a| < ρ < R and let Γ_ρ be its boundary, i.e. the circle of radius ρ centred at a (Figure 4.8 (b)).
Since f is analytic, we can represent the value f(z) using Theorem 4.2.5 (Cauchy's Integral Formula). We get that

f(z) = (1/(2πi)) ∮_{Γ_ρ} f(u)/(u − z) du. (4.13)
Since |(z − a)/(u − a)| < 1, we can use the geometric series 1/(1 − q) = Σ_{n=0}^∞ qⁿ with the convergence condition |q| < 1, where q = (z − a)/(u − a).
We will expand 1/(u − z) in a power series. We have that

1/(u − z) = 1/((u − a) − (z − a)) = (1/(u − a)) · 1/(1 − (z − a)/(u − a))
= (1/(u − a)) Σ_{n=0}^∞ ((z − a)/(u − a))ⁿ = Σ_{n=0}^∞ (z − a)ⁿ/(u − a)ⁿ⁺¹.

Since |(z − a)/(u − a)| = |z − a|/ρ < 1, the real series Σ_{n=0}^∞ (|z − a|/ρ)ⁿ is convergent (again using the geometric series) and it dominates the series Σ_{n=0}^∞ ((z − a)/(u − a))ⁿ, hence the latter is uniformly convergent for u ∈ Γ_ρ. Then we can replace 1/(u − z) by Σ_{n=0}^∞ (z − a)ⁿ/(u − a)ⁿ⁺¹ in (4.13) and we can integrate term by term. One obtains

f(z) = (1/(2πi)) ∮_{Γ_ρ} f(u)/(u − z) du
= (1/(2πi)) ∮_{Γ_ρ} f(u) Σ_{n=0}^∞ (z − a)ⁿ/(u − a)ⁿ⁺¹ du
= Σ_{n=0}^∞ [(1/(2πi)) ∮_{Γ_ρ} f(u)/(u − a)ⁿ⁺¹ du] (z − a)ⁿ.

By denoting cₙ = (1/(2πi)) ∮_{Γ_ρ} f(u)/(u − a)ⁿ⁺¹ du, one obtains

f(z) = Σ_{n=0}^∞ cₙ(z − a)ⁿ.
hence cₙ = f⁽ⁿ⁾(a)/n!.
The function f(z)/(z − a)ⁿ⁺¹ is analytic on the so-called punctured disk D_R(a) \ {a}, which is a doubly connected domain. Then, for any closed contour Γ ⊂ D_R(a) such that a belongs to the domain bounded by Γ (Figure 4.8 (c)), one obtains from Theorem 4.2.3 (Cauchy's Theorem for Multiply Connected Domains) the following formula:

cₙ = (1/(2πi)) ∮_Γ f(z)/(z − a)ⁿ⁺¹ dz. (4.14)
2. e^z = Σ_{n=0}^∞ zⁿ/n!, z ∈ C;

3. sin z = Σ_{n=0}^∞ ((−1)ⁿ/(2n + 1)!) z²ⁿ⁺¹, z ∈ C;

4. cos z = Σ_{n=0}^∞ ((−1)ⁿ/(2n)!) z²ⁿ, z ∈ C.
Theorem 4.3.5. If f is an analytic function in D_R(a) \ {a}, then there exist cₙ ∈ C, n ∈ Z, such that

f(z) = Σ_{n=−∞}^∞ cₙ(z − a)ⁿ, ∀z ∈ D_R(a) \ {a}. (4.15)

Proof. Let z be any point in D_R(a) \ {a}. Consider an annulus A_{ρ,r}(a) bounded by the circles Γ_ρ and Γ_r centred at a, with r < ρ < R, such that z ∈ A_{ρ,r}(a) (Figure 4.9 (b)).

Figure 4.9: (a) An Isolated Singular Point; (b) The Annulus; (c) Transformation into a Simply Connected Domain; (d) An Arbitrary Curve in the Annulus
hence

f(z) = (1/(2πi)) ∮_{Γ_ρ} f(u)/(u − z) du − (1/(2πi)) ∮_{Γ_r} f(u)/(u − z) du. (4.16)

From the proof of Theorem 4.3.3 we get that

(1/(2πi)) ∮_{Γ_ρ} f(u)/(u − z) du = Σ_{n=0}^∞ cₙ(z − a)ⁿ,

where

cₙ = (1/(2πi)) ∮_Γ f(z)/(z − a)ⁿ⁺¹ dz (4.17)

and Γ is an arbitrary closed contour which encircles the point a (see Figure 4.9 (d)).
For the second integral in (4.16) we will write the ratio −1/(u − z) as a series (using the geometric series). Since u ∈ Γ_r and z ∈ A_{ρ,r}(a), we have |u − a| < |z − a|, hence |(u − a)/(z − a)| < 1.
The convergence condition |q| < 1 for q = (u − a)/(z − a) holds and

−1/(u − z) = 1/(z − u) = 1/((z − a) − (u − a)) = (1/(z − a)) · 1/(1 − (u − a)/(z − a))
= (1/(z − a)) Σ_{n=0}^∞ ((u − a)/(z − a))ⁿ = Σ_{n=0}^∞ (u − a)ⁿ/(z − a)ⁿ⁺¹.

In order to unify the results that we have obtained so far with (4.17), we change the index of summation by defining m to be −n − 1. So we have that m = −(n + 1), m = −1 for n = 0 and m → −∞ when n → ∞. Therefore

−1/(u − z) = Σ_{m=−∞}^{−1} (u − a)^{−(m+1)}(z − a)^m = Σ_{m=−∞}^{−1} (z − a)^m/(u − a)^{m+1} = Σ_{n=−∞}^{−1} (z − a)ⁿ/(u − a)ⁿ⁺¹.

If we replace −1/(u − z) in (4.16) and if we integrate term by term, then

−(1/(2πi)) ∮_{Γ_r} f(u)/(u − z) du = Σ_{n=−∞}^{−1} [(1/(2πi)) ∮_{Γ_r} f(u)/(u − a)ⁿ⁺¹ du] (z − a)ⁿ,

hence

−(1/(2πi)) ∮_{Γ_r} f(u)/(u − z) du = Σ_{n=−∞}^{−1} cₙ(z − a)ⁿ,

where

cₙ = (1/(2πi)) ∮_{Γ_r} f(z)/(z − a)ⁿ⁺¹ dz. (4.18)
so

f(z) = Σ_{n=−∞}^∞ cₙ(z − a)ⁿ, (4.20)

where

cₙ = (1/(2πi)) ∮_Γ f(z)/(z − a)ⁿ⁺¹ dz, n ∈ Z. (4.21)

Remember that Γ is any closed contour such that a is the unique singular point of f in the domain bounded by Γ.
The series (4.20) is called the Laurent Series of the function f in a punctured neighborhood of a (or about a).
The series Σ_{n=0}^∞ cₙ(z − a)ⁿ and Σ_{n=−∞}^{−1} cₙ(z − a)ⁿ in (4.19) are called the Taylorian Part and the Principal Part of the Laurent series, respectively.
With respect to the Taylorian part and the principal part we will be able to classify singular points. This follows in the next section.
4.4 Classification of Singular Points

Let a be a singular point of the function f, i.e. f is not analytic at a.

f(z) = sin z/z = (1/z)(z/1! − z³/3! + z⁵/5! − ··· + (−1)ⁿ z²ⁿ⁺¹/(2n + 1)! + ···)
= 1 − z²/3! + z⁴/5! − ··· + (−1)ⁿ z²ⁿ/(2n + 1)! + ··· .

Therefore the principal part is null.
Poles

Definition 4.4.3. The point a is called a pole of order k of f, k ∈ N*, if there exists a function h, analytic in a disk D_R(a) with h(a) ≠ 0, such that

f(z) = h(z)/(z − a)ᵏ.

A pole of order k = 1 is said to be a simple pole.
Since h is analytic, it has a Taylor series expansion on D_R(a), so

h(z) = c₋ₖ + c₋ₖ₊₁(z − a) + ··· + c₋₁(z − a)ᵏ⁻¹ + c₀(z − a)ᵏ + c₁(z − a)ᵏ⁺¹ + ···

and 0 ≠ h(a) = c₋ₖ. Then the Laurent series of f is

f(z) = c₋ₖ/(z − a)ᵏ + c₋ₖ₊₁/(z − a)ᵏ⁻¹ + ··· + c₋₁/(z − a) + c₀ + c₁(z − a) + ···,

hence the principal part of the Laurent series has a finite number of terms.
Essential Singular Points

Definition 4.4.4. The point a is called an essential singular point of f if it is neither a removable singular point, nor a pole of f, hence if and only if the principal part of the Laurent series of f has an infinite number of nonzero terms.

Example 4.4.5. The function f(z) = e^{1/(z−3)} has z = 3 as an isolated singular point. By replacing z with 1/(z − 3) in the exponential series e^z = Σ_{n=0}^∞ zⁿ/n!, one obtains the Laurent series with the coefficients c₋ₙ = 1/n!, n ∈ N*, given by

e^{1/(z−3)} = 1 + 1/(1!(z − 3)) + 1/(2!(z − 3)²) + ··· + 1/(n!(z − 3)ⁿ) + ··· .

It is obvious that the principal part contains an infinite number of nonzero terms, hence z = 3 is an essential singular point of f.
There exist functions with non-isolated singular points. For instance, the function f(z) = 1/sin(π/z) has the singular points 0, 1, 1/2, ..., 1/n, ... and lim_{n→∞} 1/n = 0, therefore there is no punctured disk D_R(0) \ {0} in which f is analytic, i.e. the singular point 0 is not an isolated point.
Remark 4.4.6. So far we have discussed Laurent series in a punctured neighborhood of a singular point a, which can be considered as an annulus A_{R,0}(a) = {z ∈ C : 0 < |z − a| < R}. The previous discussion can be extended to different annuli and one obtains different Laurent series.

Example 4.4.7. Let us consider the function f(z) = 1/(z²(z − 1)(z − 2)) and split this ratio into simple fractions. So

1/(z²(z − 1)(z − 2)) = A/z + B/z² + C/(z − 1) + D/(z − 2),

where A = 3/4, B = 1/2, C = −1 and D = 1/4.
We will find Laurent series expansions in several situations using the Taylor series expansion 1/(1 − q) = 1 + q + q² + ··· + qⁿ + ···, with the convergence condition |q| < 1.
1. If |z| < 1, i.e. z ∈ A_{1,0}(0), we have that

f(z) = (3/4)(1/z) + (1/2)(1/z²) − 1/(z − 1) + (1/4) · 1/(z − 2)
= (3/4)(1/z) + (1/2)(1/z²) + 1/(1 − z) − (1/8) · 1/(1 − z/2).

Considering q = z, respectively q = z/2 (so |z/2| = |z|/2 < 1/2 < 1), it follows that

f(z) = (3/4)(1/z) + (1/2)(1/z²) + (1 + z + ··· + zⁿ + ···) − (1/8)(1 + z/2 + ··· + zⁿ/2ⁿ + ···)
= (3/4)(1/z) + (1/2)(1/z²) + 7/8 + (15/16)z + ··· + (1 − 1/2ⁿ⁺³)zⁿ + ··· .
2. If 1 < |z| < 2, i.e. z ∈ A_{2,1}(0), we have that

f(z) = (3/4)(1/z) + (1/2)(1/z²) − 1/(z − 1) + (1/4) · 1/(z − 2)
= (3/4)(1/z) + (1/2)(1/z²) − (1/z) · 1/(1 − 1/z) − (1/8) · 1/(1 − z/2).

For q = 1/z (so 1 < |z| ⇔ |1/z| < 1), respectively q = z/2 (so |z/2| = |z|/2 < 1), we obtain that

f(z) = (3/4)(1/z) + (1/2)(1/z²) − (1/z)(1 + 1/z + 1/z² + ··· + 1/zⁿ + ···) − (1/8)(1 + z/2 + z²/2² + ··· + zⁿ/2ⁿ + ···)
= ··· − 1/zⁿ⁺¹ − 1/zⁿ − ··· − 1/z³ − (1/2)(1/z²) − (1/4)(1/z) − 1/8 − (1/16)z − ··· − (1/2ⁿ⁺³)zⁿ − ··· .
3. If |z| > 2, i.e. z ∈ A_{∞,2}(0), we have that

f(z) = (3/4)(1/z) + (1/2)(1/z²) − 1/(z − 1) + (1/4) · 1/(z − 2)
= (3/4)(1/z) + (1/2)(1/z²) − (1/z) · 1/(1 − 1/z) + (1/(4z)) · 1/(1 − 2/z).

Now, since |z| > 2, we can take q = 1/z (so |1/z| < 1/2 < 1), respectively q = 2/z (so |2/z| < 1), and one obtains

f(z) = (3/4)(1/z) + (1/2)(1/z²) − (1/z)(1 + 1/z + 1/z² + ··· + 1/zⁿ + ···) + (1/(4z))(1 + 2/z + 2²/z² + ··· + 2ⁿ/zⁿ + ···)
= ··· + (2ⁿ⁻² − 1)(1/zⁿ⁺¹) + (2ⁿ⁻³ − 1)(1/zⁿ) + ··· + 0 · (1/z²) + 0 · (1/z).
4.5 Exercises

E 21. Determine the Laurent series expansion of the function f in the following situations:
a) f(z) = 1/(z − 2) + 1/z², if |z| < 2;
b) f(z) = 2/(z² + 4), if |z − 2i| > 4;
c) f(z) = e^{2/z}/z⁵, around a = 0;
d) f(z) = cosh z/(z − 1), if |z| < 1.
Solution. a) The Laurent series expansion of the function f in the case |z| < 2 is obviously around a = 0. We have that 1/z² is already a term of this expansion, belonging to the principal part, and for 1/(z − 2) we will use the geometric series. We notice that the condition of convergence of the geometric expansion 1/(1 − q) = Σ_{n=0}^∞ qⁿ is |q| < 1. Since we have the condition |z| < 2, it follows that |z/2| < 1, so we can use the geometric expansion for q = z/2. Hence

1/(z − 2) = −(1/2) · 1/(1 − z/2) = −(1/2)(1 + z/2 + z²/2² + ··· + zⁿ/2ⁿ + ···).

We obtain that

f(z) = 1/z² − 1/2 − z/4 − ··· − zⁿ/2ⁿ⁺¹ − ···,

so the conclusion is that the principal part of the Laurent series expansion of the function f has only one term and the Taylor part has an infinity of terms, under the specified conditions;
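The coefficients just obtained can be cross-checked against formula (4.21), cₙ = (1/(2πi))∮_{|z|=r} f(z)/zⁿ⁺¹ dz, taking any circle inside the annulus 0 < |z| < 2. The sketch below is not part of the original text; the radius r = 1 and the step count are arbitrary choices.

```python
import cmath
import math

def laurent_coeff(f, n, r=1.0, steps=4000):
    # c_n = (1/(2*pi*i)) * contour integral of f(z)/z^(n+1) over |z| = r,
    # approximated by a Riemann sum on z(t) = r*e^{it}
    h = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        z = r * cmath.exp(1j * k * h)
        total += f(z) / z ** (n + 1) * 1j * z * h
    return total / (2j * math.pi)

f = lambda z: 1 / (z - 2) + 1 / z ** 2
for n in (-2, -1, 0, 1, 2):
    print(n, laurent_coeff(f, n))  # expected: 1, 0, -1/2, -1/4, -1/8
```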
b) Taking into consideration the restriction |z − 2i| > 4, it follows that the Laurent series expansion of the function f is to be found around the point at infinity. We use the alternate geometric expansion 1/(1 + q) = Σ_{n=0}^∞ (−1)ⁿqⁿ, if |q| < 1. We first split the function into simple fractions; so

f(z) = 2/(z² + 4) = 2/((z − 2i)(z + 2i)) = A/(z − 2i) + B/(z + 2i),

where A = −i/2 and B = i/2, so

f(z) = (i/2)(1/(z + 2i) − 1/(z − 2i)).

We have now two simple fractions, but 1/(z − 2i) is a term of the principal part of the Laurent series expansion. For 1/(z + 2i) we want to use the alternate geometric expansion, but we need to see for which q this is possible. The condition |z − 2i| > 4 leads to 4/|z − 2i| < 1, or |4i/(z − 2i)| < 1. Since the condition of convergence is |q| < 1, it follows that a suitable q is q = 4i/(z − 2i).
We transform the ratio 1/(z + 2i) as follows:

1/(z + 2i) = 1/((z − 2i) + 4i) = (1/(z − 2i)) · 1/(1 + 4i/(z − 2i)).

Now, using the alternate geometric expansion, we get that

1/(z + 2i) = (1/(z − 2i))(1 − 4i/(z − 2i) + 4²i²/(z − 2i)² − ··· + (−1)ⁿ4ⁿiⁿ/(z − 2i)ⁿ + ···)
= 1/(z − 2i) − 4i/(z − 2i)² + ··· + (−1)ⁿ4ⁿiⁿ/(z − 2i)ⁿ⁺¹ + ···,

so

f(z) = (i/2)(−4i/(z − 2i)² + ··· + (−1)ⁿ4ⁿiⁿ/(z − 2i)ⁿ⁺¹ + ···)
= 2/(z − 2i)² + ··· + (−1)ⁿ4ⁿiⁿ⁺¹/(2(z − 2i)ⁿ⁺¹) + ··· .

Let us notice that in this situation there is no Taylor part, only a principal part.
Another method is to isolate the singularity by writing f(z) = (2/(z − 2i)) · (1/(z + 2i)) and to multiply the expansion of the second fraction by the first one;
c) Consider the Taylor series expansion around a = 0 of the exponential function e^z = 1 + z/1! + z²/2! + ··· + zⁿ/n! + ···, ∀z ∈ C. By replacing z with 2/z, one obtains

e^{2/z} = 1 + 2/(1!z) + 2²/(2!z²) + ··· + 2ⁿ/(n!zⁿ) + ···

and

f(z) = e^{2/z}/z⁵ = 1/z⁵ + 2/(1!z⁶) + 2²/(2!z⁷) + ··· + 2ⁿ/(n!zⁿ⁺⁵) + ···,

which is the expansion needed, again with an infinity of terms in the principal part, but none in the Taylor part.
d) We write the function f as f(z) = cosh z · 1/(z − 1).
Since cosh z = (e^z + e^{−z})/2 and using the Taylor series expansion of e^z around 0, it follows that

cosh z = [(1 + z/1! + ··· + zⁿ/n! + ···) + (1 − z/1! + ··· + (−1)ⁿzⁿ/n! + ···)]/2
= (2 + 2 · z²/2! + ··· + 2 · z²ⁿ/(2n)! + ···)/2
= 1 + z²/2! + ··· + z²ⁿ/(2n)! + ···, ∀z ∈ C.

On the other hand, if |z| < 1, we have that 1/(z − 1) admits a Taylor series expansion around 0,

1/(z − 1) = −1/(1 − z) = −1 − z − z² − ··· − zⁿ − ··· .

Hence

f(z) = (1 + z²/2! + ··· + z²ⁿ/(2n)! + ···)(−1 − z − z² − ··· − zⁿ − ···).

It follows that c₀ = −1, c₋ₙ = 0 for all n > 0 and, for n ≥ 1, cₙ = −(1 + 1/2! + ··· + 1/(2⌊n/2⌋)!), so cₙ → −cosh 1 as n → ∞.
We have obtained that the Taylor part has an infinite number of terms and the principal part has none (since f is analytic, being the product of two functions which are both analytic for |z| < 1).
b) We obtain that

f(z) = (i/2)(−4i/(z − 2i)² + (4i)²/(z − 2i)³ − (4i)³/(z − 2i)⁴ + ··· + (−1)ⁿ(4i)ⁿ/(z − 2i)ⁿ⁺¹ + ···);

c) We get that

f(z) = 1/z² − 1/(3!z⁴) + 1/(5!z⁶) − ··· + (−1)ⁿ/((2n + 1)!z²ⁿ⁺²) + ··· ;
e) We have that

1/sin z = 1/sin((z − π) + π) = −1/sin(z − π).

By replacing z with z − π in the Taylor series expansion of sin z, one obtains sin(z − π) = (z − π)/1! − (z − π)³/3! + ··· + (−1)ⁿ(z − π)²ⁿ⁺¹/(2n + 1)! + ···, hence

1/sin z = −1/((z − π)/1! − (z − π)³/3! + ··· + (−1)ⁿ(z − π)²ⁿ⁺¹/(2n + 1)! + ···)
= −(1/(z − π)) · 1/(1 − (z − π)²/3! + ··· + (−1)ⁿ(z − π)²ⁿ/(2n + 1)! + ···).

Since z = π is a first order pole of the function f, it follows that

1/(1 − (z − π)²/3! + ··· + (−1)ⁿ(z − π)²ⁿ/(2n + 1)! + ···)

has to be a Taylor series expansion, so

1/(1 − (z − π)²/3! + ··· + (−1)ⁿ(z − π)²ⁿ/(2n + 1)! + ···) = a₀ + a₁(z − π) + a₂(z − π)² + ··· + aₙ(z − π)ⁿ + ···,

hence

(a₀ + a₁(z − π) + ··· + aₙ(z − π)ⁿ + ···) · (1 − (z − π)²/3! + ··· + (−1)ⁿ(z − π)²ⁿ/(2n + 1)! + ···) = 1

and, identifying the coefficients, one obtains

a₀ = 1, a₁ = 0, a₂ − a₀/3! = 0, ... .
E 22. Calculate ∮_Γ f(z)dz in the following cases:
a) f(z) = cosh z/(z − 1)², Γ : |z| = 2;
b) f(z) = (e^{2z} + z²)/(z² + 3), Γ : |z − i| = 1.

Considering g(z) = (e^{2z} + z²)/(z + i√3), one can write f(z) = g(z)/(z − i√3). Since g is an analytic function in D₁(i), using (4.7), one obtains

∮_{|z−i|=1} f(z)dz = ∮_{|z−i|=1} g(z)/(z − i√3) dz = 2πi g(i√3)
= 2πi (e^{2i√3} − 3)/(2i√3) = (π/√3)(e^{2i√3} − 3).
Chapter 5
Residue Theory
Figure 5.1: The Unique Singular Point a in the Domain Bounded by Γ
Figure 5.2: The Possible Cases of Singular Points in the Residue Theorems
hence

∮_{Γⱼ} f(z)dz = 2πi · res(f, aⱼ).
Figure 5.3: The Isolated Singular Points of f
Example 5.2.1. Let us consider f(z) = (e^z + z)/(z − 1). We have that z = 1 is a simple pole of the function f since h(z) = e^z + z is analytic in C and h(1) = e + 1 ≠ 0. We obtain that

res(f, 1) = lim_{z→1} (z − 1)f(z) = lim_{z→1} (e^z + z) = e + 1.
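This residue can be double-checked against the definition res(f, a) = c₋₁ = (1/(2πi))∮_{|z−a|=r} f(z)dz. The sketch below is not part of the original text; the radius 0.5 and the step count are arbitrary choices.

```python
import cmath
import math

def residue_via_contour(f, a, r=0.5, steps=4000):
    # res(f, a) = (1/(2*pi*i)) * contour integral of f over |z - a| = r,
    # approximated by a Riemann sum on z(t) = a + r*e^{it}
    h = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        z = a + r * cmath.exp(1j * k * h)
        total += f(z) * 1j * r * cmath.exp(1j * k * h) * h
    return total / (2j * math.pi)

f = lambda z: (cmath.exp(z) + z) / (z - 1)
print(residue_via_contour(f, 1.0), math.e + 1)  # the two values agree
```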
Example 5.2.2. Now, let us take f(z) = (e^z + z)/(z − 2)². We have that z = 2 is a pole of order 2 of the function f since h(z) = e^z + z is analytic in C and h(2) = e² + 2 ≠ 0. We obtain that

res(f, 2) = (1/1!) lim_{z→2} [(z − 2)²f(z)]′ = lim_{z→2} (e^z + z)′ = lim_{z→2} (e^z + 1) = e² + 1.

It is not necessary for a function to have just one isolated singular point of pole type.
Example 5.2.3. Consider the function f(z) = (e^z + z)/((z − 2)(z² − 3z + 2)). Because z² − 3z + 2 = (z − 1)(z − 2), it follows that f has two isolated singular points, z₁ = 1 and z₂ = 2.
Since z = 1 is a simple pole of f (h(z) = (e^z + z)/(z − 2)² is analytic in D_R(1), 0 < R < 1, h(1) = e + 1 ≠ 0), we obtain that

res(f, 1) = lim_{z→1} (z − 1)f(z) = lim_{z→1} (e^z + z)/(z − 2)² = (e + 1)/1 = e + 1.

On the other hand, z = 2 is a pole of order 2 of the function f (h(z) = (e^z + z)/(z − 1) is analytic in D_R(2), 0 < R < 1, h(2) = e² + 2 ≠ 0), so

res(f, 2) = (1/1!) lim_{z→2} [(z − 2)²f(z)]′ = lim_{z→2} ((e^z + z)/(z − 1))′
= lim_{z→2} ((e^z + 1)(z − 1) − (e^z + z))/(z − 1)² = −1.
Consider the case of a simple pole a of a function f(z) = P(z)/Q(z), where P and Q are analytic functions in a disk D_R(a). Then there exists a function Q₁, analytic in D_R(a) and satisfying Q₁(a) ≠ 0, such that Q(z) = Q₁(z)(z − a). Differentiating, Q′(z) = Q₁′(z)(z − a) + Q₁(z) and, for z = a, we get Q′(a) = Q₁(a).
By (5.6) we have that

res(f, a) = P(a)/Q′(a). (5.7)
Example 5.2.4. The function f(z) = tan z = sin z/cos z has the poles zₖ = π/2 + kπ, k ∈ Z (the roots of cos z = 0).
Since (cos z)′ = −sin z and sin(π/2 + kπ) = (−1)ᵏ ≠ 0, the zₖ are simple poles. Obviously (5.7) is preferable to (5.6) and

res(f, zₖ) = sin z/(cos z)′ |_{z=zₖ} = sin z/(−sin z) |_{z=zₖ} = −1.
Lemma 5.3.1. Let Γ_R be an arc of a circle of radius R, centred at a point a (Figure 5.4).
i) If lim_{z→a} (z − a)f(z) = 0, then lim_{R→0} ∫_{Γ_R} f(z)dz = 0;
ii) If lim_{|z−a|→∞} (z − a)f(z) = 0, then lim_{R→∞} ∫_{Γ_R} f(z)dz = 0.
Proof. i) Let ε > 0 be an arbitrary number, fixed for the entire proof. Since we have that lim_{z→a} (z − a)f(z) = 0, there exists δ = δ(ε/(4π)) > 0 such that, for every z with |z − a| < δ, we get that |(z − a)f(z)| < ε/(4π).
Let R be an arbitrary number, fixed for the rest of the proof, such that 0 < R < δ. Then, for every z ∈ Γ_R, we obtain that |z − a| = R < δ, which implies that |(z − a)f(z)| < ε/(4π). It follows that for every z ∈ Γ_R, we have that

|f(z)| = |(z − a)f(z)|/|z − a| < ε/(4πR),

hence |∫_{Γ_R} f(z)dz| ≤ (ε/(4πR)) l_{Γ_R} ≤ (ε/(4πR)) · 2πR = ε/2 < ε.
We have proved that for every ε > 0, there exists δ > 0 such that for every R > 0 with |R − 0| < δ, we have that |∫_{Γ_R} f(z)dz − 0| < ε, i.e.

lim_{R→0} ∫_{Γ_R} f(z)dz = 0.

ii) The proof is similar to i) with the changes |z − a| > δ and R > δ, since R = |z − a| → ∞.
The following figures will turn out to be the key to understanding the next result (Lemma 5.3.2).

Figure 5.5: (a) The arc Γ_R of radius R in the upper half-plane, with θ = arg z; (b) The arc Γ_R in the lower half-plane

Lemma 5.3.2. i) If lim_{|z|→∞} f(z) = 0 uniformly with respect to θ = arg z ∈ [0, π], then

lim_{R→∞} ∫_{Γ_R} f(z)e^{iαz}dz = 0, ∀α > 0.
Figure: The closed contour formed by the segment [A(−R), B(R)] and the semicircle C, enclosing the singular points a₁, ..., aⱼ, ..., aₙ in the upper half-plane

Integrals of the form ∫_{−∞}^∞ f(x)cos(αx)dx and ∫_{−∞}^∞ f(x)sin(αx)dx, α > 0
If lim_{|z|→∞} f(z) = 0 uniformly in θ = arg z ∈ [0, π], then the improper integrals from above exist and

I₁ = ∫_{−∞}^∞ f(x)cos(αx)dx = Re(2πi Σⱼ₌₁ⁿ res(f(z)e^{iαz}, aⱼ)),
I₂ = ∫_{−∞}^∞ f(x)sin(αx)dx = Im(2πi Σⱼ₌₁ⁿ res(f(z)e^{iαz}, aⱼ)). (5.10)

One applies the method from the previous case to the last integral. Using Lemma 5.3.2 i), we get that

I₁ + iI₂ = 2πi Σⱼ₌₁ⁿ res(f(z)e^{iαz}, aⱼ),
Remark 5.4.3. If α < 0, one uses Lemma 5.3.2 ii) (see Figure 5.5 (b)) and one obtains, due to the trigonometric sense, the following formulas:

I₁ = ∫_{−∞}^∞ f(x)cos(αx)dx = −Re(2πi Σⱼ₌₁ⁿ res(f(z)e^{iαz}, bⱼ)),
I₂ = ∫_{−∞}^∞ f(x)sin(αx)dx = −Im(2πi Σⱼ₌₁ⁿ res(f(z)e^{iαz}, bⱼ)), (5.11)
Integrals of the form ∫₀^{2π} f(cos x, sin x)dx

In this case we have that f is a rational function. One uses the change of variable z = e^{ix}. Since |z| = 1 and x ∈ [0, 2π), z describes the unit circle (Figure 5.7).

Figure 5.7: The Unit Circle and the Interior Singular Points of f

The integral then equals ∮_{|z|=1} g(z)dz = 2πi Σⱼ res(g, aⱼ), where g(z) = f((z² + 1)/(2z), (z² − 1)/(2iz)) · 1/(iz) is a rational function and the aⱼ are the singular points of g inside the unit circle.
Example 5.4.4. Let us compute I = ∫_{−∞}^∞ 1/(3 + x²)⁴ dx.
The complex function f(z) = 1/(3 + z²)⁴ has two isolated singular points z = ±i√3 (the solutions of the equation z² + 3 = 0). Only z = i√3 is located in the upper half-plane, since Im(i√3) = √3 > 0. As i√3 is a pole of order 4 of the function f, it follows that

res(f, i√3) = (1/3!) lim_{z→i√3} [(z − i√3)⁴ · 1/(z² + 3)⁴]‴ = (1/6) lim_{z→i√3} [1/(z + i√3)⁴]‴
= (1/6) lim_{z→i√3} [−4/(z + i√3)⁵]″ = (1/6) lim_{z→i√3} [20/(z + i√3)⁶]′
= (1/6) lim_{z→i√3} (−120/(z + i√3)⁷) = (1/6) · (−120/(2i√3)⁷) = −(5/(864√3))i,

so I = 2πi · res(f, i√3) = 5π/(432√3).
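The closed form can be compared against direct numerical quadrature of the real integral. The sketch below is not part of the original text; the truncation at ±200 and the step count are arbitrary choices (the integrand decays like x⁻⁸, so the truncated tail is negligible).

```python
import math

def midpoint_integral(f, a, b, steps=200000):
    # composite midpoint rule on [a, b]
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

approx = midpoint_integral(lambda x: 1 / (3 + x * x) ** 4, -200.0, 200.0)
exact = 5 * math.pi / (432 * math.sqrt(3))
print(approx, exact)  # the two values agree to several decimals
```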
Example 5.4.5. Let us compute I = ∫₀^∞ cos 3x/((x² + 9)(x² + 1)) dx.
First, let us notice that the function f(x) = cos 3x/((x² + 9)(x² + 1)) is even, so

I = ∫₀^∞ cos 3x/((x² + 9)(x² + 1)) dx = (1/2) ∫_{−∞}^∞ cos 3x/((x² + 9)(x² + 1)) dx.

The complex function f(z) = e^{3iz}/((z² + 9)(z² + 1)) has four isolated singular points, z₁,₂ = ±i and z₃,₄ = ±3i, but only i and 3i are located in the upper half-plane.
The value of the integral is

∫_{−∞}^∞ cos 3x/((x² + 9)(x² + 1)) dx = Re(2πi(res(f, i) + res(f, 3i))).

Using (5.7),

res(f, i) = (e^{3iz}/(z² + 9))/(z² + 1)′ |_{z=i} = e^{3iz}/(2z(z² + 9)) |_{z=i} = e^{−3}/(16i),

res(f, 3i) = (e^{3iz}/(z² + 1))/(z² + 9)′ |_{z=3i} = e^{3iz}/(2z(z² + 1)) |_{z=3i} = −e^{−9}/(48i).

We obtain that

∫_{−∞}^∞ cos 3x/((x² + 9)(x² + 1)) dx = Re(2πi(e^{−3}/(16i) − e^{−9}/(48i))) = π(3e^{−3} − e^{−9})/24,

hence I = π(3e^{−3} − e^{−9})/48.
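As before, the residue computation can be compared with direct quadrature of the real integral. This sketch is not part of the original text; the truncation at 200 and the step count are arbitrary choices (the integrand decays like x⁻⁴ and oscillates, so the truncated tail is tiny).

```python
import math

def midpoint_integral(f, a, b, steps=400000):
    # composite midpoint rule on [a, b]
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

approx = midpoint_integral(lambda x: math.cos(3 * x) / ((x * x + 9) * (x * x + 1)),
                           0.0, 200.0)
exact = math.pi * (3 * math.exp(-3) - math.exp(-9)) / 48
print(approx, exact)  # the two values agree to several decimals
```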
Example 5.4.6. Let us calculate I = ∫₀^{2π} cos x/(5 + 3 sin x) dx.
Using the change of variable z = e^{ix}, we have that

x ∈ [0, 2π) → z describes the unit circle |z| = 1,
cos x = (e^{ix} + e^{−ix})/2 = (z + 1/z)/2 = (z² + 1)/(2z),
sin x = (e^{ix} − e^{−ix})/(2i) = (z − 1/z)/(2i) = (z² − 1)/(2iz),
dz = ie^{ix}dx = iz dx ⇒ dx = dz/(iz).

The integral becomes

I = ∮_{|z|=1} ((z² + 1)/(2z))/(5 + 3(z² − 1)/(2iz)) · dz/(iz) = ∮_{|z|=1} (z² + 1)/(z(3z² + 10iz − 3)) dz.

Denoting g(z) = (z² + 1)/(z(3z² + 10iz − 3)), one finds that the function g has three singular points: z = 0, z = −3i and z = −i/3 (the solutions of the equation z(3z² + 10iz − 3) = 0). We need to compute the residues only at the points inside the disk D₁(0): z = 0 and z = −i/3. Both are poles of order 1. Since

res(g, 0) = (z² + 1)/(3z² + 10iz − 3) |_{z=0} = −1/3,

res(g, −i/3) = (z² + 1)/(3z³ + 10iz² − 3z)′ |_{z=−i/3} = (z² + 1)/(9z² + 20iz − 3) |_{z=−i/3} = 1/3,

the integral is I = 2πi(res(g, 0) + res(g, −i/3)) = 0.
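Since the integrand here is an ordinary smooth periodic function, the result I = 0 is easy to confirm by direct quadrature. The sketch below is not part of the original text; the step count is an arbitrary choice.

```python
import math

def midpoint_integral(f, a, b, steps=100000):
    # composite midpoint rule on [a, b]
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

val = midpoint_integral(lambda x: math.cos(x) / (5 + 3 * math.sin(x)),
                        0.0, 2 * math.pi)
print(val)  # close to 0
```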
5.5 Exercises

E 23. Calculate the residues of the following functions:
a) f(z) = sin(πz)/(z(z² + 4));
b) f(z) = e^{iz}/(z³ − 3z + 2);
c) f(z) = cos(1/z)/(z² + 1);
d) f(z) = (z + 1) cos(πz/(z + 1)).

Solution. a) The isolated singular points of the function f(z) = sin(πz)/(z(z² + 4)) are z = 0, z = 2i and z = −2i (these points are obtained from the equation z(z² + 4) = 0).
Since lim_{z→0} sin(πz)/(z(z² + 4)) produces the undetermined case 0/0, we first need to eliminate the indeterminacy and we are going to do this using the remarkable limit lim_{z→0} sin z/z = 1. We arrange the terms of the limit as

lim_{z→0} sin(πz)/(z(z² + 4)) = lim_{z→0} sin(πz)/(πz) · lim_{z→0} π/(z² + 4) = 1 · (π/4),

which is a finite number, so z = 0 is a removable singular point of the function f and res(f, 0) = 0.
In the case of lim_{z→2i} sin(πz)/(z(z² + 4)), the limit is of the form (≠ 0)/0, hence ∞. We count how many zeros are obtained at the denominator and in this case there is only one zero, hence z = 2i is a simple pole. It follows that

res(f, 2i) = sin(πz)/[z(z² + 4)]′ |_{z=2i} = sin(πz)/(3z² + 4) |_{z=2i} = sin(2πi)/(−8).
155
Remark: The poles z = 2i and z = -2i are complex conjugates, and their residues are also complex conjugates.
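Both residues (and the conjugation remark) can be verified with a generic numerical device: the residue at a is the average of f(z)(z - a) over a small circle centred at a, since the contour integral picks out the coefficient c_{-1}. A Python sketch (illustrative, not part of the book's toolkit):

```python
import cmath

def residue(f, a, r=1e-2, n=4096):
    # Average of f(z)*(z-a) over the circle z = a + r*e^{it}
    # approximates the Laurent coefficient c_{-1} = res(f, a).
    total = 0
    for k in range(n):
        w = r*cmath.exp(2j*cmath.pi*k/n)
        total += f(a + w)*w
    return total/n

f = lambda z: cmath.sin(cmath.pi*z)/(z*(z**2 + 4))
exact = cmath.sin(2j*cmath.pi)/(-8)
assert abs(residue(f, 2j) - exact) < 1e-8
assert abs(residue(f, -2j) - exact.conjugate()) < 1e-8  # conjugate residues
print(abs(residue(f, 0)) < 1e-8)  # True: removable point, residue 0
```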
b) We notice that z^3-3z+2 = (z-1)^2(z+2) (one can use Horner's rule to decompose it), so the isolated singular points of f are z = 1 and z = -2.
Since \lim_{z\to 1}\dfrac{e^{iz}}{z^3-3z+2} = \lim_{z\to 1}\dfrac{e^{iz}}{(z-1)^2(z+2)} = \dfrac{\neq 0}{0} = \infty, one obtains that z = 1 is a pole of order 2 and
\[
\operatorname{res}(f,1) = \frac{1}{1!}\lim_{z\to 1}\left[(z-1)^2\frac{e^{iz}}{(z-1)^2(z+2)}\right]'
= \lim_{z\to 1}\left(\frac{e^{iz}}{z+2}\right)'
= \lim_{z\to 1}\frac{ie^{iz}(z+2)-e^{iz}}{(z+2)^2}
= \frac{e^{i}(3i-1)}{9}.
\]
Since \lim_{z\to -2}\dfrac{e^{iz}}{z^3-3z+2} = \lim_{z\to -2}\dfrac{e^{iz}}{(z-1)^2(z+2)} = \dfrac{\neq 0}{0} = \infty, z = -2 is a simple pole and
\[
\operatorname{res}(f,-2) = \frac{e^{iz}}{(z-1)^2}\bigg|_{z=-2} = \frac{e^{-2i}}{9}.
\]
Remark: To compute the residue at a simple pole, one can use either (5.6) or (5.7).
c) The isolated singular points of f are z = 0 (inside the cosine there is a ratio with denominator z) and z = \pm i (from the equation z^2+1 = 0).
When we evaluate \lim_{z\to 0}\dfrac{\cos\frac1z}{z^2+1}, we obtain that the limit does not exist, so z = 0 is an essential singularity. This is the most difficult situation for computing the residue, since we need to find the coefficient c_{-1} of the Laurent series expansion of f around the essential singularity.
In our case, using the expansion of \cos w with w = \frac1z and the geometric series with ratio q = -z^2, |q| < 1, we get that
\[
\frac{\cos\frac1z}{z^2+1} = \cos\frac1z\cdot\frac{1}{z^2+1}
= \left(1 - \frac{1}{2!z^2} + \frac{1}{4!z^4} + \cdots + \frac{(-1)^n}{(2n)!z^{2n}} + \cdots\right)
\cdot\bigl(1 - z^2 + z^4 + \cdots + (-1)^n z^{2n} + \cdots\bigr).
\]
We know that c_{-1} is the coefficient of \frac1z obtained from the above product, and it is 0, since only even powers of z occur. It follows that \operatorname{res}(f,0) = 0.
Both singular points z = i and z = -i are simple poles and, using (5.6) or (5.7), one finds \operatorname{res}(f,i) = \dfrac{\cos i}{2i} and \operatorname{res}(f,-i) = -\dfrac{\cos i}{2i};
d) We have that z = -1 is the only isolated singular point of f, and it is an essential point. In order to find the Laurent series expansion of f around z = -1, we use that
\[
\cos\frac{\pi z}{z+1} = \cos\frac{\pi(z+1-1)}{z+1} = \cos\left(\pi - \frac{\pi}{z+1}\right) = -\cos\frac{\pi}{z+1}.
\]
Using the Taylor series expansion of \cos w with w = \dfrac{\pi}{z+1}, one obtains
\[
f(z) = -(z+1)\left(1 - \frac{\pi^2}{2!(z+1)^2} + \cdots + \frac{(-1)^n\pi^{2n}}{(2n)!(z+1)^{2n}} + \cdots\right)
= -(z+1) + \frac{\pi^2}{2!(z+1)} + \cdots + \frac{(-1)^{n+1}\pi^{2n}}{(2n)!(z+1)^{2n-1}} + \cdots,
\]
hence c_{-1} = \dfrac{\pi^2}{2!} and also \operatorname{res}(f,-1) = \dfrac{\pi^2}{2!}.
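The value π²/2 at the essential singularity can also be confirmed by averaging f(z)(z + 1) over a circle around -1, which recovers c_{-1} (an illustrative Python check):

```python
import cmath
import math

# res(f, -1) for f(z) = (z+1)*cos(pi*z/(z+1)) is the average of
# f(z)*(z+1) over a circle |z+1| = r, which recovers c_{-1}.
def residue(f, a, r=0.5, n=4096):
    total = 0
    for k in range(n):
        w = r*cmath.exp(2j*cmath.pi*k/n)
        total += f(a + w)*w
    return total/n

f = lambda z: (z + 1)*cmath.cos(cmath.pi*z/(z + 1))
print(abs(residue(f, -1) - math.pi**2/2) < 1e-9)  # True
```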
a) f(z) = \dfrac{z^4+1}{(z-1)(z^2-1)};
b) f(z) = \dfrac{\cosh\frac1z}{z^2-3z+2};
c) f(z) = \dfrac{e^{-\frac1z}+z}{(z-1)^2};
d) f(z) = \dfrac{1}{5-4\cos z};
e) f(z) = \dfrac{z^2}{1-\cos z};
f) f(z) = \dfrac{z}{z^{100}+2^{100}}.
b) Since
\[
\frac{\cosh\frac1z}{z^2-3z+2} = \cosh\frac1z\left(\frac{1}{z-2}-\frac{1}{z-1}\right)
= \left(1+\frac{1}{2!z^2}+\cdots+\frac{1}{(2n)!z^{2n}}+\cdots\right)
\cdot\left(\left(1-\frac12\right)+\left(1-\frac{1}{2^2}\right)z+\cdots+\left(1-\frac{1}{2^{n+1}}\right)z^n+\cdots\right),
\]
it follows that
\[
\operatorname{res}(f,0)=\frac{1}{2!}\left(1-\frac{1}{2^2}\right)+\frac{1}{4!}\left(1-\frac{1}{2^4}\right)+\cdots+\frac{1}{(2n)!}\left(1-\frac{1}{2^{2n}}\right)+\cdots
=\left(\frac{1}{2!}+\cdots+\frac{1}{(2n)!}+\cdots\right)-\left(\frac{1}{2!\,2^2}+\frac{1}{4!\,2^4}+\cdots+\frac{1}{(2n)!\,2^{2n}}+\cdots\right)
=(\cosh 1-1)-\left(\cosh\frac12-1\right)=\cosh 1-\cosh\frac12;
\]
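The coefficient series can be summed numerically; note that \sum_{n\ge1}\frac{1}{(2n)!\,2^{2n}} = \cosh\frac12 - 1, so the residue equals \cosh 1 - \cosh\frac12 \approx 0.41545. A short Python check (illustrative):

```python
import math

# Partial sums of sum_{n>=1} (1 - 2^{-2n})/(2n)! converge to
# cosh(1) - cosh(1/2), the residue of cosh(1/z)/(z^2-3z+2) at 0.
c_minus_1 = sum((1 - 0.25**n)/math.factorial(2*n) for n in range(1, 25))
print(abs(c_minus_1 - (math.cosh(1) - math.cosh(0.5))) < 1e-12)  # True
```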
c) The point z = 0 is an essential singularity. Since
\[
\frac{e^{-\frac1z}+z}{(z-1)^2} = \left(e^{-\frac1z}+z\right)\cdot\left(\frac{1}{1-z}\right)'
= \left(z + 1 - \frac{1}{1!z} + \frac{1}{2!z^2} - \cdots + \frac{(-1)^n}{n!z^n} + \cdots\right)
\cdot\bigl(1 + 2z + 3z^2 + \cdots + (n+1)z^n + \cdots\bigr),
\]
we obtain that
\[
\operatorname{res}(f,0) = -\frac{1}{1!} + \frac{2}{2!} - \frac{3}{3!} + \cdots + \frac{(-1)^n n}{n!} + \cdots
= -1 + \frac{1}{1!} - \frac{1}{2!} + \frac{1}{3!} - \cdots + \frac{(-1)^n}{(n-1)!} + \cdots = -e^{-1};
\]
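The alternating series for c_{-1} is easy to check numerically (an illustrative Python snippet):

```python
import math

# c_{-1} = sum_{n>=1} (-1)^n * n/n! = sum_{n>=1} (-1)^n/(n-1)! = -1/e.
c_minus_1 = sum((-1)**n*n/math.factorial(n) for n in range(1, 25))
print(abs(c_minus_1 + math.exp(-1)) < 1e-12)  # True
```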
f) The equation z^{100} = -2^{100} has one hundred solutions, namely
\[
z_k = 2\left(\cos\frac{(2k+1)\pi}{100} + i\sin\frac{(2k+1)\pi}{100}\right),\quad k = \overline{0,99},
\]
all simple poles, and
\[
\operatorname{res}(f,z_k) = \frac{z_k}{100 z_k^{99}} = \frac{z_k^2}{100 z_k^{100}} = -\frac{z_k^2}{100\cdot 2^{100}}.
\]
E 24. Calculate \oint_\Gamma f(z)\,dz in the following cases:
a) f(z) = \dfrac{\cosh z}{(z-1)^2}, \quad \Gamma: |z| = 2;
b) f(z) = \dfrac{e^{2z}+z^2}{z^2+3}, \quad \Gamma: |z-i| = 1;
c) f(z) = \dfrac{\sin z}{z^2(z-3)}, \quad \Gamma: |z-2| = 2;
d) f(z) = \dfrac{z^2}{(z^2+1)^3}, \quad \Gamma: |z| = 4;
e) f(z) = \dfrac{e^{-\frac1z}}{z^2+z}, \quad \Gamma: x^2+4y^2 = 4.
Solution. The first and the second integrals have already been solved in the previous chapter, using Theorem 4.2.8 and formula (4.8) (Cauchy's Integral Formula for Derivatives), respectively Theorem 4.2.5 and formula (4.7) (Cauchy's Integral Formula).
a) We have that |z| = 2 is the circle of radius 2 centred at the origin, and D_2(0) is a simply connected domain bounded by \Gamma. We obtain that z = 1 is a pole of order 2 of the function f in D_2(0) and
\[
\operatorname{res}(f,1) = \lim_{z\to 1}\left[(z-1)^2\frac{\cosh z}{(z-1)^2}\right]'
= \lim_{z\to 1}(\cosh z)' = \lim_{z\to 1}\sinh z = \sinh 1,
\]
hence, from (5.2), we get that
\[
\oint_{|z|=2}\frac{\cosh z}{(z-1)^2}\,dz = 2\pi i\,\operatorname{res}(f,1) = 2\pi i\sinh 1;
\]
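The contour integral itself can be approximated by the trapezoidal rule on |z| = 2, which is spectrally accurate for this smooth periodic parametrization (an illustrative Python check):

```python
import cmath
import math

# \oint_{|z|=2} cosh z/(z-1)^2 dz via z = 2e^{it}, dz = 2i e^{it} dt.
n = 4096
total = 0
for k in range(n):
    z = 2*cmath.exp(2j*cmath.pi*k/n)
    total += cmath.cosh(z)/(z - 1)**2*(2j*cmath.pi/n)*z
print(abs(total - 2j*cmath.pi*math.sinh(1)) < 1e-9)  # True
```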
d) The isolated singular points of f are z = \pm i, both third order poles and both belonging to D_4(0). We get that
\[
\operatorname{res}(f,i) = \frac{1}{2!}\lim_{z\to i}\left[(z-i)^3\frac{z^2}{(z^2+1)^3}\right]''
= \frac12\lim_{z\to i}\left(\frac{z^2}{(z+i)^3}\right)''
= \frac12\lim_{z\to i}\left(\frac{2zi-z^2}{(z+i)^4}\right)'
= \frac12\lim_{z\to i}\frac{(2i-2z)(z+i)-4(2zi-z^2)}{(z+i)^5} = \frac{1}{16i}
\]
and
\[
\operatorname{res}(f,-i) = \frac{1}{2!}\lim_{z\to -i}\left[(z+i)^3\frac{z^2}{(z^2+1)^3}\right]''
= \frac12\lim_{z\to -i}\left(\frac{z^2}{(z-i)^3}\right)''
= \frac12\lim_{z\to -i}\left(\frac{-2zi-z^2}{(z-i)^4}\right)'
= \frac12\lim_{z\to -i}\frac{(-2i-2z)(z-i)-4(-2zi-z^2)}{(z-i)^5} = -\frac{1}{16i},
\]
hence
\[
\oint_{|z|=4}\frac{z^2}{(z^2+1)^3}\,dz = 2\pi i\,[\operatorname{res}(f,i)+\operatorname{res}(f,-i)] = 0;
\]
e) We have that x^2+4y^2 = 4 \Leftrightarrow \frac{x^2}{4} + y^2 = 1 is an ellipse of semi-axes a = 2 and b = 1, centred at the origin. Let \Delta be the domain bounded by this ellipse, which is a simply connected set.
We get that z = -1 and z = 0 are the isolated singular points of f, both belonging to \Delta.
It is obvious that z = -1 is a simple pole and that
\[
\operatorname{res}(f,-1) = \frac{e^{-\frac1z}}{z}\bigg|_{z=-1} = -e,
\]
while z = 0 is an essential singular point (e^{\infty} does not exist in \mathbb{C}), hence \operatorname{res}(f,0) = c_{-1}, the coefficient of \frac1z from the Laurent series expansion of the function f around 0. If 0 < |z| < 1, then
\[
f(z) = \frac{e^{-\frac1z}}{z}\cdot\frac{1}{z+1}
= \frac1z\left(1 - \frac{1}{1!z} + \cdots + \frac{(-1)^n}{n!z^n} + \cdots\right)\cdot\bigl(1 - z + z^2 - \cdots + (-1)^n z^n + \cdots\bigr).
\]
The coefficient of \frac1z collects the products of \frac{(-1)^n}{n!z^{n+1}} with (-1)^n z^n, hence
\[
c_{-1} = \sum_{n\ge 0}\frac{(-1)^n(-1)^n}{n!} = 1 + \frac{1}{1!} + \frac{1}{2!} + \cdots = e.
\]
We have obtained that
\[
\oint_{x^2+4y^2=4}\frac{e^{-\frac1z}}{z^2+z}\,dz = 2\pi i\,[\operatorname{res}(f,-1)+\operatorname{res}(f,0)] = 2\pi i(-e+e) = 0.
\]
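The delicate step here is the sign pattern in the Cauchy product: the factor (-1)^n from e^{-1/z} meets the factor (-1)^n from the geometric series, so every contribution to c_{-1} is positive and the coefficient sums to e. A circle-averaging check in Python confirms this and the cancellation of the two residues (illustrative):

```python
import cmath
import math

# Average of f(z)*z over |z| = 1/2 (which encloses only z = 0)
# recovers c_{-1} for f(z) = e^{-1/z}/(z^2+z).
n = 4096
total = 0
for k in range(n):
    z = 0.5*cmath.exp(2j*cmath.pi*k/n)
    total += cmath.exp(-1/z)/(z**2 + z)*z
res0 = total/n
assert abs(res0 - math.e) < 1e-9              # res(f, 0) = e
integral = 2j*cmath.pi*((-math.e) + res0)     # 2*pi*i*(res(f,-1)+res(f,0))
print(abs(integral) < 1e-8)  # True: the integral vanishes
```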
W 20. Calculate I = \oint_\Gamma f(z)\,dz in the following cases:
a) f(z) = \dfrac{z}{z^4-1}, \quad \Gamma: |z| = 2;
b) f(z) = \dfrac{\cos(2z)-1}{z^2}, \quad \Gamma: |z| = R > 0;
c) f(z) = \dfrac{1}{z^3+8}, \quad \Gamma: |z-2| = R > 0;
d) f(z) = \dfrac{1}{(z^2+1)^n}, \quad n \in \mathbb{Z},\ n \ge 1, \quad \Gamma: |z| = 2;
e) f(z) = \dfrac{e^{\frac{1}{z^2}}}{z-1}, \quad \Gamma: |z| = R > 0;
f) f(z) = \dfrac{z}{\sin^2 z}, \quad \Gamma: |z| = 4.
Answer. a) We get that
\[
\oint_\Gamma f(z)\,dz = 2\pi i\,[\operatorname{res}(f,1)+\operatorname{res}(f,-1)+\operatorname{res}(f,i)+\operatorname{res}(f,-i)] = 0;
\]
2. if R = 1, then I = 2\pi i\cdot\operatorname{res}(f,0) + \pi i\cdot\operatorname{res}(f,1) = \pi i(2-e);
f) We have that I = 2\pi i\,(\operatorname{res}(f,-\pi)+\operatorname{res}(f,0)+\operatorname{res}(f,\pi)) = 6\pi i.
\[
I = \oint_{|z|=1}\frac{\dfrac{z^2+1}{2z}}{\left(5+4\,\dfrac{z^2-1}{2iz}\right)^2}\cdot\frac{dz}{iz}
= -\frac{1}{2i}\oint_{|z|=1}\frac{z^2+1}{(2z^2+5iz-2)^2}\,dz.
\]
Denoting by g(z) = \dfrac{z^2+1}{(2z^2+5iz-2)^2}, one finds that the function g has two isolated singular points, z = -2i and z = -\frac{i}{2} (the solutions of the equation 2z^2+5iz-2 = 0). We only need to take into consideration the points inside the disk D_1(0), so we choose z = -\frac{i}{2}, which is a pole of order 2.
Since
\[
\operatorname{res}\Bigl(g,-\frac{i}{2}\Bigr)
= \frac{1}{1!}\lim_{z\to-\frac{i}{2}}\left[\Bigl(z+\frac{i}{2}\Bigr)^2\frac{z^2+1}{(2z^2+5iz-2)^2}\right]'
= \lim_{z\to-\frac{i}{2}}\left[\Bigl(z+\frac{i}{2}\Bigr)^2\frac{z^2+1}{4\bigl(z+\frac{i}{2}\bigr)^2(z+2i)^2}\right]'
= \frac14\lim_{z\to-\frac{i}{2}}\left(\frac{z^2+1}{(z+2i)^2}\right)'
= \frac14\lim_{z\to-\frac{i}{2}}\frac{4iz-2}{(z+2i)^3} = 0,
\]
the integral is I = -\frac{1}{2i}\cdot 2\pi i\cdot\operatorname{res}\Bigl(g,-\frac{i}{2}\Bigr) = 0;
b) First of all, one notices that the function f(x) = \dfrac{x^2+2}{x^4+5x^2+4} is an even function (one verifies that f(-x) = f(x), \forall x \in \mathbb{R}), hence I = \frac12\int_{-\infty}^{\infty}\dfrac{x^2+2}{x^4+5x^2+4}\,dx.
The complex function f(z) = \dfrac{z^2+2}{z^4+5z^2+4} has four isolated singular points, z = \pm i and z = \pm 2i (the solutions of the equation z^4+5z^2+4 = 0 \Leftrightarrow (z^2+1)(z^2+4) = 0). In the upper half-plane there are only z = i and z = 2i, both simple poles.
Since
\[
\operatorname{res}(f,i) = \frac{z^2+2}{4z^3+10z}\bigg|_{z=i} = \frac{1}{6i}
\quad\text{and}\quad
\operatorname{res}(f,2i) = \frac{z^2+2}{4z^3+10z}\bigg|_{z=2i} = \frac{1}{6i},
\]
it follows that I = \frac12\cdot 2\pi i\,[\operatorname{res}(f,i)+\operatorname{res}(f,2i)] = \dfrac{\pi}{3};
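A direct numerical check with the simple-pole formula p(z)/q'(z) (illustrative Python):

```python
import cmath
import math

# res(f, a) = (a^2+2)/(4a^3+10a) at the simple poles a = i and a = 2i.
res = lambda a: (a**2 + 2)/(4*a**3 + 10*a)
I = (0.5*2j*cmath.pi*(res(1j) + res(2j))).real
print(abs(I - math.pi/3) < 1e-12)  # True
```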
c) Consider the integral I, where I = \operatorname{Im} J and J = \int_{-\infty}^{\infty}\dfrac{xe^{i\pi x}}{2x^2-2x+1}\,dx.
The complex function f(z) = \dfrac{ze^{i\pi z}}{2z^2-2z+1} has two isolated singular points, z = \dfrac{1\pm i}{2}, the solutions of the equation 2z^2-2z+1 = 0. Only z = \dfrac{1+i}{2} is in the upper half-plane, and it is a simple pole.
Since
\[
\operatorname{res}\left(f,\frac{1+i}{2}\right) = \frac{ze^{i\pi z}}{4z-2}\bigg|_{z=\frac{1+i}{2}} = \frac{1+i}{4i}\,e^{i\frac{\pi}{2}-\frac{\pi}{2}},
\]
it follows that
\[
J = 2\pi i\cdot\operatorname{res}\left(f,\frac{1+i}{2}\right) = \frac{\pi}{2}(1+i)e^{i\frac{\pi}{2}-\frac{\pi}{2}},
\]
hence
\[
I = \operatorname{Im} J = \operatorname{Im}\left(\frac{\pi}{2}(1+i)e^{i\frac{\pi}{2}-\frac{\pi}{2}}\right)
= \frac{\pi}{2}\operatorname{Im}\left[(1+i)e^{-\frac{\pi}{2}}\left(\cos\frac{\pi}{2}+i\sin\frac{\pi}{2}\right)\right]
= \frac{\pi}{2}\operatorname{Im}\left[(i-1)e^{-\frac{\pi}{2}}\right] = \frac{\pi}{2}e^{-\frac{\pi}{2}};
\]
d) Let us split the integral:
\[
I = \int_0^\infty \frac{x\sin x}{(x^2+1)^2}\,dx + \int_0^\infty \frac{\cos x}{(x^2+1)^2}\,dx.
\]
Since f_1(x) = \dfrac{x\sin x}{(x^2+1)^2} and f_2(x) = \dfrac{\cos x}{(x^2+1)^2} are both even functions, we have that
\[
I = \frac12\left(\int_{-\infty}^{\infty}\frac{x\sin x}{(x^2+1)^2}\,dx + \int_{-\infty}^{\infty}\frac{\cos x}{(x^2+1)^2}\,dx\right).
\]
Let us denote J_1 = \int_{-\infty}^{\infty}\dfrac{xe^{ix}}{(x^2+1)^2}\,dx. The complex function f_1(z) = \dfrac{ze^{iz}}{(z^2+1)^2} has in the upper half-plane only the isolated singular point z = i, which is a pole of order 2.
Since
\[
\operatorname{res}(f_1,i) = \lim_{z\to i}\left[(z-i)^2\frac{ze^{iz}}{(z^2+1)^2}\right]'
= \lim_{z\to i}\left(\frac{ze^{iz}}{(z+i)^2}\right)' = \frac{e^{-1}}{4},
\]
it follows that J_1 = 2\pi i\cdot\dfrac{e^{-1}}{4} = \dfrac{\pi ie^{-1}}{2} and
\[
\int_{-\infty}^{\infty}\frac{x\sin x}{(x^2+1)^2}\,dx = \operatorname{Im}(J_1) = \frac{\pi e^{-1}}{2}.
\]
Similarly, taking J_2 = \int_{-\infty}^{\infty}\dfrac{e^{ix}}{(x^2+1)^2}\,dx, one obtains J_2 = \pi e^{-1} and
\[
\int_{-\infty}^{\infty}\frac{\cos x}{(x^2+1)^2}\,dx = \operatorname{Re}(J_2) = \pi e^{-1}.
\]
Hence
\[
I = \frac12\left(\frac{\pi e^{-1}}{2} + \pi e^{-1}\right) = \frac{3\pi e^{-1}}{4}.
\]
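The two residues behind J_1 and J_2 can be confirmed by averaging over a small circle around the double pole z = i (an illustrative Python sketch, not part of the book's toolkit):

```python
import cmath
import math

def residue(g, a, r=1e-2, n=4096):
    # Average of g(z)*(z-a) over a small circle around a gives c_{-1}.
    total = 0
    for k in range(n):
        w = r*cmath.exp(2j*cmath.pi*k/n)
        total += g(a + w)*w
    return total/n

g1 = lambda z: z*cmath.exp(1j*z)/(z**2 + 1)**2
g2 = lambda z: cmath.exp(1j*z)/(z**2 + 1)**2
assert abs(residue(g1, 1j) - math.exp(-1)/4) < 1e-8
J1 = 2j*cmath.pi*residue(g1, 1j)
J2 = 2j*cmath.pi*residue(g2, 1j)
I = 0.5*(J1.imag + J2.real)
print(abs(I - 3*math.pi*math.exp(-1)/4) < 1e-7)  # True
```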
h) I = \int_{-\infty}^{\infty}\dfrac{x^4-x^3+1}{1+x^{12}}\,dx;
i) I = \int_0^{\infty}\dfrac{x^3\sin x}{(x^2+4)^2}\,dx;
j) I = \int_{-\infty}^{\infty}\dfrac{(x+1)\cos(\pi x)}{x^2+a^2}\,dx, \quad a \in \mathbb{R},\ a \neq 0;
k) I = \int_0^{\infty}\dfrac{\cos^2 x}{x^2+1}\,dx;
l) I = \int_0^{\infty}\dfrac{x\sin(ax)+\cos(bx)}{x^2+c^2}\,dx, \quad a,b,c \in \mathbb{R}^*_+.
Answer. a) I = \pi; b) I = \dfrac{10\pi}{27}; c) I = \dfrac{2\pi}{3}; d) I = \dfrac{2\pi\,\mathrm{sgn}(a)}{\sqrt{a^2-1}};
e) We need to discuss the values of a. If |a| < 1, a \neq 0, then I = 2\pi a^n, and for |a| > 1 one obtains I = -\dfrac{2\pi}{a^n}. In the case a = 0, we have to discuss the values of n: if n \ge 1, the integral is null, and for n = 0 one obtains I = 2\pi;
f) I = \sqrt{2}\,\pi; g) I = \dfrac{3\pi}{32}; h) I = \dfrac{2\sqrt{2}\,\pi}{3}\left(\cos\dfrac{\pi}{12}+\sin\dfrac{\pi}{12}\right);
i) I = 0; j) I = \dfrac{\pi}{a}\,e^{-|a|\pi};
k) I = \int_0^{\infty}\dfrac{1+\cos(2x)}{2(x^2+1)}\,dx = \dfrac12\left(\int_0^{\infty}\dfrac{dx}{x^2+1} + \int_0^{\infty}\dfrac{\cos(2x)}{x^2+1}\,dx\right). Since \int_0^{\infty}\dfrac{dx}{x^2+1} = \dfrac{\pi}{2} and \int_0^{\infty}\dfrac{\cos(2x)}{x^2+1}\,dx = \dfrac12\int_{-\infty}^{\infty}\dfrac{\cos(2x)}{x^2+1}\,dx, we easily get that I = \dfrac{\pi}{4}(1+e^{-2});
l) I = \dfrac{\pi}{2}\left(e^{-ac} + \dfrac{e^{-bc}}{c}\right).
W 22. Let f and g be two complex functions.
b) For g(z) = \dfrac{f(z)\,e^{2/z}}{z^2-z}, compute \operatorname{res}(g,1), \operatorname{res}(g,0) and \oint_{|z|=2} g(z)\,dz.
W 23. Let F : D = \mathbb{R}\times\left(-\frac{\pi}{2},\frac{\pi}{2}\right) \to \mathbb{R},
Some topics which are not included in this book for lack of space can be
found in the Bibliography. For instance, limits and continuity of functions
of a complex variable, complex sequences and series or conformal mappings
are presented in [5], [20, Chapters 1 and 5], [18], [21], [24], [33], [35], [37].
Chapter 6
SOFTWARE
EA =
0.0067 0.0067
0 0.0067
Example 6.1.2. A = [-6 1;0 -6];
answer:
EA =
0.0025 0.0025
0 0.0025
Example 6.1.3. A = [-6.9 1;0 -6.9];
answer:
EA =
0.0010 0.0010
0 0.0010
Example 6.1.4. A = [-20 1;0 -20];
answer:
EA = 1.0e-008 *
0.2061 0.2061
0 0.2061
Example 6.1.5. A = [-5 1;0 -5]; t = 0:1:10;
for i=1:11
EA=expm(A*t(i))
end
answer:
EA = 1.0e-020 *
0.0193 0.0193
0 0.0193
Example 6.1.6. Solve, for t = 3, the initial value problem Y'(t) = AY(t), Y(0) = Y_0,
\[
A = \begin{pmatrix} -2 & 6 \\ 3 & -4 \end{pmatrix}, \qquad Y_0 = \begin{pmatrix} 1 \\ 5 \end{pmatrix}.
\]
Solution. A = [-2 6;3 -4]; Y0 = [1 5]'; Y3 = expm(A*3)*Y0;
answer:
Y3 =
  239.0996
  133.8519
Stability
The commands eig and lyap can be used to verify the stability of linear
continuous-time systems with constant coefficients Y 0 (x) = AY (x), x ∈ R,
where A is a real n × n matrix (see Theorem 1.3.2).
The syntax is one of the following:
1. Λ = eig(A);
2. [T, D] = eig(A);
Now we have to determine the eigenvalues and the eigenvectors of the following drift matrices (for continuous-time systems).
Example 6.1.11. A = [-8 -6 6;6 2 -3;1 -1 0];
Solution. [T, D] = eig(A); dT = det(T)
Answer:
T =
   0.4516    0.2673   -0.0000
  -0.7903   -0.8018    0.7071
  -0.4140   -0.5345    0.7071
D =
  -3.0000         0         0
        0   -2.0000         0
        0         0   -1.0000
dT = -0.0142.
Conclusion: The eigenvalues -1, -2, -3 verify the condition Re λ < 0, hence the system is asymptotically stable. The eigenvectors (the columns of T) are linearly independent (since det(T) ≠ 0). We can also observe that the eigenvalues are distinct.
Example 6.1.12. A = [1 6 -10 -8;-1 -3 -3 5;1 2 -7 -1;0 1 -4 -2];
Solution. [T, D] = eig(A)
Answer:
T =
   0.9049    0.4264    0.4264   -0.4264
  -0.0312   -0.8528   -0.8528    0.8528
   0.2496   -0.2132   -0.2132    0.2132
   0.3432   -0.2132   -0.2132    0.2132
D =
  -5    0    0    0
   0   -2    0    0
   0    0   -2    0
   0    0    0   -2
Conclusion: The eigenvalues -5, -2, -2, -2 verify the condition Re λ < 0, hence the system is asymptotically stable.
The second column of T is an eigenvector v_2 corresponding to the multiple eigenvalue λ = -2 of A (with m_g(λ) = 1 < 3 = m_a(λ)). The third and the fourth columns are respectively v_2 and -v_2, hence the matrix T is singular.
This situation corresponds to the following Jordan canonical form of A:
J =
  -5    0    0    0
   0   -2    1    0
   0    0   -2    1
   0    0    0   -2
where the 3 × 3 Jordan cell (the second cell) is associated to the eigenvector v_2.
Example 6.1.13. A = [0 5 0 0;-5 0 0 0;0 0 0 -5;0 0 5 0];
Solution. [T, D] = eig(A); dT = det(T)
Answer:
T =
   0.0000 - 0.7071i   0.0000 + 0.7071i   0.0000 + 0.0000i   0.0000 + 0.0000i
   0.7071 + 0.0000i   0.7071 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i
   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 - 0.7071i   0.0000 + 0.7071i
   0.0000 + 0.0000i   0.0000 + 0.0000i  -0.7071 + 0.0000i  -0.7071 + 0.0000i
D =
   0.0000 + 5.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i
   0.0000 + 0.0000i   0.0000 - 5.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i
   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 5.0000i   0.0000 + 0.0000i
   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 - 5.0000i
dT = 1.
Conclusion: The eigenvalues λ_1 = 5i and λ_2 = -5i have algebraic multiplicities m_a(λ_i) = 2, i ∈ {1, 2}. The corresponding eigenvectors are linearly independent, therefore m_g(λ_i) = 2 = m_a(λ_i), ∀i ∈ {1, 2}. The system is stable, but not asymptotically stable, since Re(λ_{1,2}) = 0.
Example 6.1.15. A = [6.0000 2.0000;-16.0000 -5.0000]; Q = [1 -1;-1 2];
P = lyap(A, Q)
eig(P)
The answer is the following:
P =
  -3.7500   11.0000
  11.0000  -35.0000
and the eigenvalues are -38.4837 and -0.2663.
The eigenvalues of the matrix P are negative, hence the matrix P is negative definite. It follows that A is an unstable matrix.
Now we will solve the Lyapunov equation with formula (1.41). We will use the command kron, which has the syntax K = kron(A, B). Action: if A = [a_{ij}]_{1≤i≤m;1≤j≤n} is an m × n matrix and B is a p × q matrix, then K = kron(A, B) returns the Kronecker product of the matrices A and B, i.e. the mp × nq matrix K = A ⊗ B = [a_{ij}B]_{1≤i≤m;1≤j≤n}.
Example 6.1.16. A = [-1 2;3 -4]; B = [1 0 1;2 5 -2];
K = kron(A, B)
The answer is the following:
K =
  -1    0   -1    2    0    2
  -2   -5    2    4   10   -4
   3    0    3   -4    0   -4
   6   15   -6   -8  -20    8
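The action of kron is easy to replicate; here is a minimal pure-Python sketch of the same Kronecker product (illustrative, mirroring MATLAB's kron):

```python
# K = A (x) B: block (i,j) of K is a_ij * B, so K has size (m*p) x (n*q).
def kron(A, B):
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    return [[A[i//p][j//q]*B[i % p][j % q]
             for j in range(n*q)] for i in range(m*p)]

A = [[-1, 2], [3, -4]]
B = [[1, 0, 1], [2, 5, -2]]
K = kron(A, B)
print(K[0])  # [-1, 0, -1, 2, 0, 2]
print(K[3])  # [6, 15, -6, -8, -20, 8]
```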
q =
   4
   1
   1
   2
p =
   3.3333
   1.0000
   1.0000
   0.6667
P =
   3.3333   1.0000
   1.0000   0.6667
D_1 = 3.3333 > 0, D_2 = 1.2222 > 0. The matrix P is positive definite, hence the system is asymptotically stable.
Discrete-time Lyapunov equation
The discrete-time Lyapunov equation
\[
APA' - P = -Q
\]
is solved by the command dlyap; the syntax is P = dlyap(A, Q).
Functions
Let z be a complex number and Z a complex array.
The function real, which is used as x = real(z) or X = real(Z) returns
the real part of z or the real part of the elements of Z.
The function conj, which is used as Zbar = conj(Z) returns the complex
conjugate of the elements of Z.
Taylor Series Expansion
The command is taylor and the syntax is the following:
syms z;
taylor(f(z), z, 'ExpansionPoint', a, 'Order', n),
where n is the truncation order specified by the name-value pair argument Order, and a is the expansion point specified by the name-value pair argument ExpansionPoint.
If we write taylor(f(z), z, a), the expansion point is a and the truncation order takes its default value n = 6; if we also omit a, writing taylor(f(z), z), the default expansion point is a = 0.
Example 6.2.13. F(z) = taylor(sin(z), z, 'Order', 10):
F(z) = z^9/362880 - z^7/5040 + z^5/120 - z^3/6 + z.
6.2.2 MAPLE
The basic imaginary unit is represented in MAPLE by the capital letter
I.
Functions
The function Complex is similar to the function complex in MATLAB.
Example 6.2.18. Matrix([[1, -2], [0, 3]]) + Matrix([[Pi, 12], [5, 0]])·I:
    [1 + I·Pi   -2 + 12·I]
    [5·I         3       ]
The functions Re, Im, abs, argument, conjugate are similar to the
functions real, imag, abs, angle, conj in MATLAB.
Example 6.2.21. abs(1 - I·sqrt(Pi)): sqrt(1 + Pi).
Example 6.2.22. argument(1 - I·sqrt(Pi)): -arctan(sqrt(Pi)).
Example 6.2.23. argument(1 - I·sqrt(3)): -(1/3)·Pi.
Example 6.2.24. conjugate(1 - I·sqrt(Pi)): 1 + I·sqrt(Pi).
expression, and furnishes polar(r, t), where r and t are real expressions. So polar converts the complex-valued expression z to its representation in polar coordinates: the result polar(r, t) has r equal to the modulus of z and t equal to the argument of z.
The point at infinity (complex infinity, denoted in MAPLE by cs-infinity) has the algebraic form infinity + I·infinity = ∞ + ∞·I and the canonical (polar) form ∞·e^(undefined·I) (i.e. r = ∞ and t = undefined).
Example 6.2.31. z := sqrt(3) - I: sqrt(3) - I; polar(z): polar(2, -(1/6)·Pi).
be reduced to the region 0 ≤ Re z ≤ a, 0 ≤ Im z ≤ b. With the option 'modulo s' they will be reduced to the region -a/2 ≤ Re z ≤ a/2, -b/2 ≤ Im z ≤ b/2.
Example 6.2.32. Load the package RootFinding.
with(RootFinding)
f := z^4 + 2·z^3 + z^2 - 2·z - 2
    z^4 + 2z^3 + z^2 - 2z - 2
Analytic(f, z, -1 - I..1 + I)
    -1.00000000000000, -1.00000000000000 - 1.00000000000000·I,
    -1.00000000000000 + 1.00000000000000·I
Example 6.2.33. f := z^2 - 2·z - 2·I + 1
    z^2 - 2z + 1 - 2·I
zeros := Analytic(f, z, -1.5 - 1.5·I..2 + 1.5·I)
    -1.·I, 2.00000000000000 + 1.00000000000000·I
plots[complexplot](zeros, style = "point", axes = boxed)
Example 6.2.34. Determine a polynomial with given roots and verify the result.
zeros := 1 - I, 2 + 3·I, Pi/2, I·Pi:
f := mul(z - z0, z0 = zeros)
    (z - 1 + I)(z - 2 - 3·I)(z - (1/2)·Pi)(z - I·Pi)
Analytic(f, z, 3·(-I)..4 + 4·I)
    1.57079632679490, 3.14159265358983·I, 1.00000000000000 - 1.00000000000000·I,
    2.00000000000000 + 3.00000000000000·I
Definite and Indefinite Integration
The command int performs definite and indefinite integration. The syntax is one of the following:
int(expression, x) — the indefinite integral of expression with respect to x
int(expression, x = a..b) — the definite integral of expression for x from a to b
int(expression, [x, y])
int(expression, x = a..b, opt)
int(expression, [x = a..b, y = c..d, ...], opt)
int(f, a..b, opt)
int(f, [a..b, c..d, ...], opt)
If a and b are complex numbers, then the int routine computes the definite integral over the straight line from a to b.
Example 6.2.36. int(exp((2 - 3·I)·z), z)
    (2/13 + (3/13)·I)·e^((2 - 3·I)·z)
Example 6.2.39. int(x^3, x = 2 + I..3 + 4·I)
    -130 - 90·I
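Since z³ is entire, the integral depends only on the endpoints, which makes the MAPLE answer easy to verify by hand or in any language with complex arithmetic (here Python, for illustration):

```python
# Integral of z^3 from 2+i to 3+4i equals z^4/4 evaluated at the endpoints.
a, b = 2 + 1j, 3 + 4j
val = (b**4 - a**4)/4
print(val)  # (-130-90j)
```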
FunctionAdvisor/branch_cuts returns the branch cuts of a function. The syntax is the following:
FunctionAdvisor(branch_cuts, math_function_name)
The call returns the branch cuts of the function, if any, or the string "No branch cuts". If the requested information is not available, the FunctionAdvisor command returns NULL.
FunctionAdvisor/branch_points returns the branch points of a function. The syntax is the following:
FunctionAdvisor(branch_points, math_function)
The call returns the branch points of the function, if any, or the string "No branch points". If the requested information is not available, the FunctionAdvisor command returns NULL.
For arcsin, for instance, the branch cuts are
    [arcsin(z), z <= -1, 1 <= z]
The branch cuts are sometimes compactly expressed in terms of RealRange or ComplexRange; in these cases, converting the result to a relation may be of help to figure out where the cuts are.
    [e^z, z = infinity + infinity·I]
But the answer is wrong for composite functions, as we can see from the following.
Example 6.2.50. FunctionAdvisor(singularities, sin(1/(z - 1)))
    [sin(1/(z - 1)), 1/(z - 1) = infinity + infinity·I]
Taylor and Laurent Series Expansion
For the Taylor series expansion we use the command taylor. The syntax is the following:
taylor(expression, x = a, n)
The taylor command computes the order-n Taylor series expansion of expression, with respect to the given variable, about the point a.
taylor(exp(z), z = 1 + I, 4)
    e^(1+I) + e^(1+I)·(z - 1 - I) + (1/2)·e^(1+I)·(z - 1 - I)^2 + (1/6)·e^(1+I)·(z - 1 - I)^3 + O((z - 1 - I)^4)
Example 6.2.53. taylor(exp(1/z), z = 1, 5)
    e - e·(z - 1) + (3/2)·e·(z - 1)^2 - (13/6)·e·(z - 1)^3 + (73/24)·e·(z - 1)^4 + O((z - 1)^5)
Example 6.2.54. The following answer is provided when the function is not analytic at a.
taylor(exp(1/z), z = 0, 5)
    Error, (in series/exp) unable to compute series
Example 6.2.55. taylor(sin(z), z = 0, 10)
    z - (1/6)·z^3 + (1/120)·z^5 - (1/5040)·z^7 + (1/362880)·z^9 + O(z^10)
Example 6.2.56. taylor(cos(z), z = 0, 10)
    1 - (1/2)·z^2 + (1/24)·z^4 - (1/720)·z^6 + (1/40320)·z^8 + O(z^10)
Example 6.2.57. taylor(1/(1 - z), z = 0, 5)
    1 + z + z^2 + z^3 + z^4 + O(z^5)
Example 6.2.58. taylor(1/(1 - z), z = 1 - I, 5)
    -I - (z - 1 + I) + I·(z - 1 + I)^2 + (z - 1 + I)^3 - I·(z - 1 + I)^4 + O((z - 1 + I)^5)
Example 6.2.59. taylor(1/(1 - z)^2, z = 0, 5)
    1 + 2z + 3z^2 + 4z^3 + 5z^4 + O(z^5)
The function laurent, within the package numapprox, computes the Laurent series expansion of the function f, with respect to the given variable, about the point a, up to order n (optional).
If the result of the series function applied to the specified arguments is a Laurent series with finite principal part (i.e. only a finite number of negative powers appear in the series), then this result is returned; otherwise an error return occurs.
Example 6.2.62. with(numapprox):
f := z -> 1/(1 - z)
    z -> 1/(1 - z)
laurent(f(z), z = 0)
    1 + z + z^2 + z^3 + z^4 + z^5 + O(z^6)
Example 6.2.63. laurent(1/(z·sin(z)), z = 0)
    z^(-2) + 1/6 + (7/360)·z^2 + O(z^4)
Example 6.2.64. laurent(1/(z·cos(z - 1)), z = 1)
    1 - (z - 1) + (3/2)·(z - 1)^2 - (3/2)·(z - 1)^3 + (41/24)·(z - 1)^4 - (41/24)·(z - 1)^5 + O((z - 1)^6)
Example 6.2.65. laurent(1/((z - 1)^3·sin(z - 1)), z = 1, 8)
    (z - 1)^(-4) + (1/6)·(z - 1)^(-2) + 7/360 + (31/15120)·(z - 1)^2 + (127/604800)·(z - 1)^4 + O((z - 1)^5)
Example 6.2.66. laurent(1 + 1/(z - 1), z = 1, 4)
    (z - 1)^(-1) + 1
Example 6.2.67. laurent((z^3 + 4·z^4 - z + 3)/((z - 1)^3·(z + 2)·(z - 2)), z = 1, 3)
    -(7/3)·(z - 1)^(-3) - (68/9)·(z - 1)^(-2) - (400/27)·(z - 1)^(-1) - 1463/81 - (4450/243)·(z - 1) - (13289/729)·(z - 1)^2 + O((z - 1)^3)
f := laurent((z^3 + 4·z^4 - z + 3)/((z - 1)^3·(z + 2)·(z - 2)), z = -2, 4)
    (61/108)·(z + 2)^(-1) - 163/432 + (167/5184)·(z + 2) + (7711/186624)·(z + 2)^2 + O((z + 2)^3)
Computation of Residues
The command residue computes the algebraic residue of a function f at an isolated singular point a. The syntax is residue(f, z = a).
Example 6.2.68. f := z -> (z + 1)/z
    z -> (z + 1)/z
residue(f(z), z = 0)
    1
residue(f(z), z = infinity)
    -1
We have that 1 + (-1) = 0.
Example 6.2.69. residue((4·z^4 + z^3 - z + 3)/((z - 1)^3·(z + 2)·(z - 2)), z = 1)
    -400/27
residue((4·z^4 + z^3 - z + 3)/((z - 1)^3·(z + 2)·(z - 2)), z = 2)
    73/4
residue((4·z^4 + z^3 - z + 3)/((z - 1)^3·(z + 2)·(z - 2)), z = -2)
    61/108
residue((4·z^4 + z^3 - z + 3)/((z - 1)^3·(z + 2)·(z - 2)), z = infinity)
    -4
We have that -400/27 + 73/4 + 61/108 + (-4) = 0.
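The consistency check above (finite residues plus the residue at infinity summing to zero) can be reproduced numerically by averaging f(z)(z - a) over small circles around each finite pole (an illustrative Python sketch, not part of the book's MAPLE session):

```python
import cmath

def residue(f, a, r=0.25, n=4096):
    # Average of f(z)*(z-a) over a small circle around a gives c_{-1},
    # for poles of any order.
    total = 0
    for k in range(n):
        w = r*cmath.exp(2j*cmath.pi*k/n)
        total += f(a + w)*w
    return total/n

f = lambda z: (4*z**4 + z**3 - z + 3)/((z - 1)**3*(z + 2)*(z - 2))
assert abs(residue(f, 2) - 73/4) < 1e-8
assert abs(residue(f, -2) - 61/108) < 1e-8
finite = residue(f, 1) + residue(f, 2) + residue(f, -2)
print(abs(finite - 4) < 1e-7)  # True: finite residues sum to -res(f, inf) = 4
```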
Example 6.2.70. residue(tan(z), z = Pi/2)
    -1
Example 6.2.71. residue(sin(z)/z, z = 0)
    0    (z = 0 is a removable singular point)
Example 6.2.72. residue(sin(z)/z^2, z = 0)
    1
Example 6.2.73. residue(sin(z)/z^4, z = 0)
    -1/6
Bibliography
[3] Balan, V., Pîrvan, M.: Matematici avansate pentru ingineri - Probleme date la Concursul Ştiinţific Studenţesc "Traian Lalescu", Matematică, anii 2002-2014, Ed. Politehnica Press, Bucureşti, 2014.
[4] Breaz, N., Crăciun, M., Gaşpar, P., Miroiu, M., Paraschiv-
Munteanu, I.: Modelarea matematică prin Matlab, Ed. StudIS, Iaşi,
2013.
[5] Breaz, D., Suciu, N., Gaşpar, P., Barbu, G., Pı̂rvan, M.,
Prepeliţă, V., Breaz, N.: Transformări integrale şi funcţii com-
plexe cu aplicaţii ı̂n tehnică, Vol. 1 - Funcţii complexe cu aplicaţii ı̂n
tehnică, Ed. StudIS, Iaşi, 2013.
[10] Halanay, A., Mateescu, M.: Elemente din teoria ecuaţiilor diferenţiale şi a ecuaţiilor integrale, Ed. Matrix Rom, Bucureşti, 2010.
[14] Kecs, W. W., Toma, A.: Calcul diferenţial şi integral, Editura
Edyro Press, Petroşani, 2010.
[22] Muir, T.: A Treatise on the Theory of Determinants, Macmillan, London, 1882.
[28] Poincaré, H.: Mémoire sur les courbes définies par une équation différentielle (I), J. Math. Pures Appl. 7 (1881), 375-422.
[29] Poincaré, H.: Mémoire sur les courbes définies par une équation différentielle (II), J. Math. Pures Appl. 8 (1882), 251-296.
[35] Stein, E., Shakarchi, R.: Princeton Lectures in Analysis. II Com-
plex Analysis, Princeton University Press, Princeton and Oxford, 2003.
Index
extended complex plane, 67
exterior point, 67
Fréchet differentiable function, 84
fundamental
  matrix, 13, 14, 18, 20, 21, 23
  system of solutions, 6, 8, 9, 11, 19, 36, 38, 39, 41, 43
general response, 23
generalized eigenvector, 7, 8, 11, 12
geometric series, 121
Green-Riemann formula, 112
harmonic function, 87
Hermitian matrix, 28
homogeneous SDEs, 2, 4
homogeneous system, 5
Hurwitz matrix, 27, 53
Hurwitz polynomial, 27, 28, 53
imaginary axis, 65
imaginary part of a complex function, 70, 77, 95, 97
imaginary part of a complex number, 65, 77, 94, 179
indefinitely increasing sequence, 66
initial condition of an SDE, 3
initial value problem, 3
input of a system, 23
input-output map, 23
integral of a function over a curve, 108
interior of a set, 67
interior point, 67
inverted complex function, 71
isolated singular point, 125
  essential, 130
  pole of order k, 129
  removable, 129
Jordan cell, 7, 8, 22, 25, 46, 54, 173, 174
Jordan's lemmas, 146
Kronecker product, 31, 176
lacuna, 69
Laurent series, 128, 133, 136, 190
limit of a complex function, 70
linear and homogeneous SDEs, 8
linear and homogeneous SDEs with constant coefficients, 6, 171, 172
linear complex function, 70
linear control system, 23
linear SDEs, 2
linear time-invariant system, 23, 177
linearization, 32
Liouville's formula, 6
logarithmic branch point, 76
logarithmic complex function, 75
Lyapunov equation, 29, 31, 53-55, 175, 176
Lyapunov function, 30, 55, 56
Lyapunov function (for a linearized system), 33
method of undetermined coefficients, 22
multiplicity
  algebraic, 7, 9, 12, 44, 173, 175
  geometric, 7, 37, 44, 173
n-uply connected domain, 68
neighborhood of a point, 66
neighborhood of the point at infinity, 67
non-diagonalizable matrix, 10, 40, 44, 45, 47
nonlinear time invariant system, 31
open disk, 66
open set, 67
ordinary point, 122
output equation, 23
output of a system, 23
partition, 108
path-wise connected set, 68
Peano-Baker formula, 14, 17
point at infinity, 66
polar coordinates, 65
positive definite functional, 33
positive definite matrix, 28
positive definite quadratic form, 30
principal branch, 74
principal vector, 8, 40-42, 49
quasipolynomials, 22
real axis, 65
real part of a complex function, 70, 78
real part of a complex number, 65, 76, 94, 179
residue at the point at infinity, 143
residue of a function at a point, 141, 155, 157, 192
Residue theorem, 142
Routh-Hurwitz criterion, 28, 53
semigroup property, 14
simple pole, 129
simply connected set, 68
single-valued complex function, 69
singularity, 125
solution of an SDE, 2, 7
spectrum, 7, 10
stable matrix, 27, 29, 57, 175, 178
stable nonlinear time invariant system, 31
stable SDEs, 24, 25
state equation, 23
state of a system, 23
Taylor series, 122, 125, 189
the radical complex function, 72
Theorem
  Existence and Uniqueness of solutions of an SDE, 3
  of Analytic Continuation, 88
  of Weierstrass, 122
  on Instability, 33
  on Local Asymptotic Stability, 33
  on Local Stability, 33
trigonometric complex functions, 88
trigonometric form of a complex number, 65
triply connected domain, 68
unstable nonlinear time invariant system, 31
unstable SDEs, 24, 25
variation of parameters, 19, 21
variation of parameters formula, 20, 22, 23, 47
Weierstrass criterion, 16
Wronskian, 5, 6, 9