
STRUCTURAL DYNAMICS, VOL.

Computational Dynamics

Søren R. K. Nielsen

[Cover illustration: the characteristic polynomial P(λ) with roots λ1, λ2, λ3, and the secant approximation y(λ) = P(µk) + (P(µk) − P(µk−1))(λ − µk)/(µk − µk−1) through the iteration points µk−1, µk, µk+1.]

Aalborg tekniske Universitetsforlag


June 2005
Contents

1 INTRODUCTION 7
1.1 Fundamentals of Linear Structural Dynamics . . . . . . . . . . . . . . . . . . 7
1.2 Solution of Initial Value Problem by Modal Decomposition Techniques . . . . 25
1.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2 NUMERICAL INTEGRATION OF EQUATIONS OF MOTION 31


2.1 Newmark Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.1.1 Numerical accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.1.2 Numerical stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.1.3 Period Distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.1.4 Numerical Damping . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.2 Generalized Alpha Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3 LINEAR EIGENVALUE PROBLEMS 53


3.1 Gauss Factorization of Characteristic Polynomials . . . . . . . . . . . . . . . . 53
3.2 Eigenvalue Separation Principle . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.3 Shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.4 Transformation of GEVP to SEVP . . . . . . . . . . . . . . . . . . . . . . . . 65
3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4 APPROXIMATE SOLUTION METHODS 69


4.1 Static Condensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2 Rayleigh-Ritz Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.3 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

5 VECTOR ITERATION METHODS 89


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2 Inverse and Forward Vector Iteration . . . . . . . . . . . . . . . . . . . . . . . 90
5.3 Shift in Vector Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.4 Inverse Vector Iteration with Rayleigh Quotient Shift . . . . . . . . . . . . . . 106
5.5 Vector Iteration with Gram-Schmidt Orthogonalization . . . . . . . . . . . . . 109
5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113


6 SIMILARITY TRANSFORMATION METHODS 115


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.2 Special Jacobi Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.3 General Jacobi Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.4 Householder Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
6.5 QR Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

7 SOLUTION OF LARGE EIGENVALUE PROBLEMS 147


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7.2 Simultaneous Inverse Vector Iteration . . . . . . . . . . . . . . . . . . . . . . 149
7.3 Subspace Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
7.4 Characteristic Polynomial Iteration . . . . . . . . . . . . . . . . . . . . . . . . 164
7.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

8 INDEX 172

A Solutions to Exercises 175


A.1 Exercise 1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
A.2 Exercise 1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
A.3 Exercise 3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
A.4 Exercise 3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
A.5 Exercise 3.4: Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
A.6 Exercise 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
A.7 Exercise 4.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
A.8 Exercise 4.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
A.9 Exercise 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
A.10 Exercise 5.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
A.11 Exercise 6.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
A.12 Exercise 6.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
A.13 Exercise 7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
A.14 Exercise 7.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
A.15 Exercise 7.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Preface

This book has been prepared for the course on Computational Mechanics given in the 8th semester of the structural engineering program in civil engineering at Aalborg University. The course presumes undergraduate knowledge of linear algebra and ordinary differential equations, as well as a basic graduate course in structural dynamics. Some of these prerequisites have been reviewed in an introductory chapter. The author wants to thank Jesper W. Larsen, Ph.D., and Ph.D. student Kristian Holm-Jørgensen for help with the preparation of figures and illustrations throughout the text.

Answers to all exercises given at the end of each chapter can be downloaded from the home
page of the course at the address: www.civil.auc.dk/i5/engelsk/dyn/index/htm

Aalborg University, June 2005


Søren R.K. Nielsen

CHAPTER 1
INTRODUCTION
In this chapter the basic results in structural dynamics and linear algebra are reviewed.

In Section 1.1 the relevant initial value and eigenvalue problems in structural dynamics are formulated. The initial value problems form the background for the numerical integration algorithms described in Chapter 2, whereas the related undamped generalized eigenvalue problem constitutes the generic problem for the numerical eigenvalue solvers described in Chapters 3-7. Formal solutions to various formulations of the initial value problem are indicated, and their shortcomings in practical applications are emphasized.

In Section 1.2 the semi-analytical solution approaches to the basic initial value problem of a multi-degree-of-freedom system, in terms of expansions in various modal bases, are presented. The application of these methods in relation to various reduction schemes, where typically merely the low-frequency modes are required, is outlined.

1.1 Fundamentals of Linear Structural Dynamics

The basic equation of motion for forced vibrations of a linear viscous damped n-degree-of-freedom system reads¹

Mẍ(t) + Cẋ(t) + Kx(t) = f(t) , t > t0
(1–1)
x(t0 ) = x0 , ẋ(t0 ) = ẋ0
x(t) is the vector of displacements from the static equilibrium state, ẋ(t) is the velocity vector, ẍ(t) is the acceleration vector, and f(t) is the dynamic load vector. x0 and ẋ0 denote the initial value vectors for the displacement and velocity, respectively. K, M and C indicate the stiffness, mass and damping matrices, all of the dimension n × n. For any vector a ≠ 0 these fulfill the following positive definiteness and symmetry properties

\[
\mathbf{a}^T\mathbf{K}\mathbf{a} > 0\,,\ \mathbf{K} = \mathbf{K}^T\,;\qquad
\mathbf{a}^T\mathbf{M}\mathbf{a} > 0\,,\ \mathbf{M} = \mathbf{M}^T\,;\qquad
\mathbf{a}^T\mathbf{C}\mathbf{a} > 0
\tag{1–2}
\]
¹ S.R.K. Nielsen: Structural Dynamics, Vol. 1. Linear Structural Dynamics, 4th Ed. Aalborg tekniske Universitetsforlag, 2004.


If the structural system is not supported against stiff-body motions, the stiffness matrix is merely positive semi-definite, so aᵀKa ≥ 0. Correspondingly, if some degrees of freedom are not carrying kinetic energy (pseudo degrees of freedom with zero mass or zero mass moment of inertia), the mass matrix is merely positive semi-definite, so aᵀMa ≥ 0. The positive definite property of the damping matrix is a formal statement of the physical property that any non-zero velocity of the system should be associated with energy dissipation. However, C need not fulfill any symmetry properties, although energy dissipation is confined to the symmetric part of the matrix. So-called aerodynamic damping loads are external dynamic loads proportional to the structural velocity, i.e. f(t) = −Cₐẋ(t). If the aerodynamic damping matrix Cₐ is absorbed into the total damping matrix C, no definiteness property can be stated.

The solution of the initial value problem (1-1) can be written in the following way¹

\[
\left.
\begin{aligned}
\mathbf{x}(t) &= \int_{t_0}^{t} \mathbf{h}(t-\tau)\,\mathbf{f}(\tau)\,d\tau + \mathbf{a}_0(t-t_0)\,\mathbf{x}_0 + \mathbf{a}_1(t-t_0)\,\dot{\mathbf{x}}_0\\
\mathbf{a}_0(t) &= \mathbf{h}(t)\,\mathbf{C} + \dot{\mathbf{h}}(t)\,\mathbf{M}\\
\mathbf{a}_1(t) &= \mathbf{h}(t)\,\mathbf{M}
\end{aligned}
\right\}
\tag{1–3}
\]

h(t) is the impulse response matrix. Formally, this matrix is obtained as a solution to the initial value problem

\[
\begin{aligned}
&\mathbf{M}\ddot{\mathbf{h}}(t) + \mathbf{C}\dot{\mathbf{h}}(t) + \mathbf{K}\mathbf{h}(t) = \mathbf{I}\,\delta(t)\\
&\mathbf{h}(0^-) = \mathbf{0}\;,\quad \dot{\mathbf{h}}(0^-) = \mathbf{0}
\end{aligned}
\tag{1–4}
\]

I is the unit matrix of the dimension n × n, and δ(t) is Dirac's delta function.

The frequency response matrix H(iω) related to the system (1-1) is given as

\[
\mathbf{H}(i\omega) = \Big( (i\omega)^2\mathbf{M} + (i\omega)\,\mathbf{C} + \mathbf{K} \Big)^{-1}
\tag{1–5}
\]

where i = √−1 is the complex unit. The impulse response matrix is related to the frequency response matrix in terms of the Fourier transform

\[
\mathbf{h}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathbf{H}(i\omega)\, e^{i\omega t}\, d\omega
\tag{1–6}
\]

The convolution quadrature in (1-3) is relatively easily evaluated numerically. Hence, the solution of (1-1) is available, if the impulse response matrix h(t) is known. In turn, the n × n components of this matrix can be calculated by the Fourier transforms (1-6). Although these transforms may be evaluated numerically, the necessary calculation efforts become excessive even for a moderate number of degrees of freedom n. Hence, more direct analytical or numerical approaches are mandatory.
 
Undamped eigenvibrations (C = 0, f(t) ≡ 0) are obtained as linearly independent solutions to the homogeneous matrix differential equation

\[
\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{0}
\tag{1–7}
\]

Solutions are searched for of the form

\[
\mathbf{x}(t) = \mathbf{\Phi}^{(j)}\, e^{i\omega_j t}
\tag{1–8}
\]

Insertion of (1-8) into (1-7) provides the following homogeneous system of linear equations for the determination of the amplitude Φ(j) and the unknown constant ωj

\[
\big( \mathbf{K} - \lambda_j\mathbf{M} \big)\,\mathbf{\Phi}^{(j)} = \mathbf{0}\;,\qquad \lambda_j = \omega_j^2
\tag{1–9}
\]

(1-9) is a so-called generalized eigenvalue problem (GEVP). If M = I, the eigenvalue problem is referred to as a special eigenvalue problem (SEVP).

The necessary condition for non-trivial solutions (i.e. Φ(j) ≠ 0) is that the determinant of the coefficient matrix is equal to zero. This leads to the characteristic equation

\[
P(\lambda) = \det\big( \mathbf{K} - \lambda\mathbf{M} \big) = 0
\tag{1–10}
\]

P(λ) indicates the characteristic polynomial. This may be expanded as

\[
P(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n
\tag{1–11}
\]

The constants a0, a1, ..., an are known as the invariants of the GEVP. This designation stems from the fact that the characteristic polynomial (1-11) is invariant under any rotation of the coordinate system. Obviously, a0 = (−1)ⁿ det(M), and an = det(K). The nth order equation (1-10) determines n solutions, λ1, λ2, ..., λn.
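The invariants are easy to check numerically. The following MATLAB sketch fits the characteristic polynomial through n + 1 sample values of det(K − λM) and compares the leading and trailing coefficients with (−1)ⁿ det(M) and det(K); the matrices below are small hypothetical examples, not taken from the text.

```matlab
% Numerical check of the invariants a0 = (-1)^n*det(M) and an = det(K).
% M and K are hypothetical example matrices.
M = [2 0; 0 1];  K = [6 -2; -2 4];
n = size(M, 1);
lam = 0:n;                                  % n+1 samples fix P(lambda) exactly
P = arrayfun(@(l) det(K - l*M), lam);       % P(lambda) = det(K - lambda*M)
a = polyfit(lam, P, n);                     % coefficients [a0, ..., an]
fprintf('a0 = %g,  (-1)^n det(M) = %g\n', a(1), (-1)^n * det(M));
fprintf('an = %g,  det(K)        = %g\n', a(end), det(K));
```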

Assume that either M or K is positive definite. Then, all eigenvalues λj are non-negative and real, and may be ordered in ascending magnitude as follows¹

\[
0 \le \lambda_1 \le \lambda_2 \le \cdots \le \lambda_{n-1} \le \lambda_n \le \infty
\tag{1–12}
\]

λn = ∞, if det(M) = 0. Similarly, λ1 = 0, if det(K) = 0. The eigenvalues are denoted as simple, if λ1 < λ2 < ··· < λn−1 < λn. The undamped circular eigenfrequencies are related to the eigenvalues as follows


\[
\omega_j = \sqrt{\lambda_j}
\tag{1–13}
\]

The corresponding solutions for the amplitude functions, Φ(1), ..., Φ(n), are denoted the undamped eigenmodes of the system, which are real as well.

The eigenvalue problems (1-9) can be assembled into the following matrix formulation

\[
\mathbf{K}\big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\ \cdots\ \mathbf{\Phi}^{(n)}\big]
= \mathbf{M}\big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\ \cdots\ \mathbf{\Phi}^{(n)}\big]
\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
\;\Rightarrow\;
\mathbf{K}\mathbf{\Phi} = \mathbf{M}\mathbf{\Phi}\mathbf{\Lambda}
\tag{1–14}
\]

where

\[
\mathbf{\Lambda} = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
\tag{1–15}
\]

and Φ is the so-called modal matrix of dimension n × n, defined as

\[
\mathbf{\Phi} = \big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\ \cdots\ \mathbf{\Phi}^{(n)}\big]
\tag{1–16}
\]
If the eigenvalues are simple, the eigenmodes fulfill the following orthogonality properties¹

\[
\mathbf{\Phi}^{(i)T}\mathbf{M}\mathbf{\Phi}^{(j)} =
\begin{cases} 0 & ,\ i \neq j\\ M_i & ,\ i = j \end{cases}
\tag{1–17}
\]

\[
\mathbf{\Phi}^{(i)T}\mathbf{K}\mathbf{\Phi}^{(j)} =
\begin{cases} 0 & ,\ i \neq j\\ \omega_i^2 M_i & ,\ i = j \end{cases}
\tag{1–18}
\]

where Mi denotes the modal mass.

The orthogonality properties (1-17) can be assembled in the following matrix equation

\[
\big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\ \cdots\ \mathbf{\Phi}^{(n)}\big]^T \mathbf{M}\,
\big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\ \cdots\ \mathbf{\Phi}^{(n)}\big]
= \begin{bmatrix}
M_1 & 0 & \cdots & 0\\
0 & M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & M_n
\end{bmatrix}
\;\Rightarrow\;
\mathbf{\Phi}^T\mathbf{M}\mathbf{\Phi} = \mathbf{m}
\tag{1–19}
\]

where

\[
\mathbf{m} = \begin{bmatrix}
M_1 & 0 & \cdots & 0\\
0 & M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & M_n
\end{bmatrix}
\tag{1–20}
\]

The corresponding grouping of the orthogonality properties (1-18) reads

\[
\mathbf{\Phi}^T\mathbf{K}\mathbf{\Phi} = \mathbf{k}
\tag{1–21}
\]

where

\[
\mathbf{k} = \begin{bmatrix}
\omega_1^2 M_1 & 0 & \cdots & 0\\
0 & \omega_2^2 M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \omega_n^2 M_n
\end{bmatrix}
\tag{1–22}
\]
If the eigenvalues are all simple, the eigenmodes become linearly independent, which means that the inverse Φ⁻¹ exists.

In the following it is generally assumed that the eigenmodes are normalized to unit modal mass,
so m = I. For the special eigenvalue problem, where M = I, it then follows from (1-19) that

Φ−1 = ΦT (1–23)

A matrix fulfilling (1-23) is known as orthonormal or unitary, and specifies a rotation of the coordinate system. All column and row vectors have the length 1, and are mutually orthogonal. It follows from (1-19) and (1-21) that in case of simple eigenvalues a so-called similarity transformation exists, defined by the modal matrix Φ, which reduces the mass and stiffness matrices to a diagonal form. In case of multiple eigenvalues the problem becomes considerably more complicated. For the standard eigenvalue problem with multiple eigenvalues it can be shown that the stiffness matrix merely reduces to the so-called Jordan normal form under the considered similarity transformation, given as follows

\[
\mathbf{k} = \begin{bmatrix}
\mathbf{k}_1 & 0 & \cdots & 0\\
0 & \mathbf{k}_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \mathbf{k}_m
\end{bmatrix}
\tag{1–24}
\]

where m ≤ n denotes the number of different eigenvalues, and kᵢ signifies the so-called Jordan boxes, which are block matrices of the form

\[
\omega_i^2\;,\quad
\begin{bmatrix} \omega_i^2 & 1\\ 0 & \omega_i^2 \end{bmatrix}\;,\quad
\begin{bmatrix} \omega_i^2 & 1 & 0\\ 0 & \omega_i^2 & 1\\ 0 & 0 & \omega_i^2 \end{bmatrix}\;,\quad
\begin{bmatrix} \omega_i^2 & 1 & 0 & 0\\ 0 & \omega_i^2 & 1 & 0\\ 0 & 0 & \omega_i^2 & 1\\ 0 & 0 & 0 & \omega_i^2 \end{bmatrix}\;,\ \ldots
\tag{1–25}
\]

Assume that the mass matrix is non-singular, so M⁻¹ exists. Then, the equations of motion (1-1) may be reformulated in the following state vector form of coupled 1st order differential equations²

\[
\begin{aligned}
&\dot{\mathbf{z}}(t) = \mathbf{A}\mathbf{z}(t) + \mathbf{F}(t)\;,\quad t > t_0\\
&\mathbf{z}(t_0) = \mathbf{z}_0
\end{aligned}
\tag{1–26}
\]

\[
\mathbf{z}(t) = \begin{bmatrix} \mathbf{x}(t)\\ \dot{\mathbf{x}}(t) \end{bmatrix},\quad
\mathbf{z}_0 = \begin{bmatrix} \mathbf{x}_0\\ \dot{\mathbf{x}}_0 \end{bmatrix},\quad
\mathbf{A} = \begin{bmatrix} \mathbf{0} & \mathbf{I}\\ -\mathbf{M}^{-1}\mathbf{K} & -\mathbf{M}^{-1}\mathbf{C} \end{bmatrix},\quad
\mathbf{F}(t) = \begin{bmatrix} \mathbf{0}\\ \mathbf{M}^{-1}\mathbf{f}(t) \end{bmatrix}
\tag{1–27}
\]

z(t) denotes the state vector. The corresponding homogeneous differential system reads

ż(t) = Az(t) (1–28)

The solution of (1-26) becomes²

\[
\mathbf{z}(t) = e^{\mathbf{A}t}\left( e^{-\mathbf{A}t_0}\,\mathbf{z}_0 + \int_{t_0}^{t} e^{-\mathbf{A}\tau}\,\mathbf{F}(\tau)\,d\tau \right)
\tag{1–29}
\]

The 2n × 2n matrix e^{At} is denoted the matrix exponential function. This forms a fundamental matrix to (1-28), i.e. the columns of e^{At} form 2n linearly independent solutions to (1-28). Actually, e^{At} is the fundamental matrix fulfilling the matrix initial value problem

\[
\begin{aligned}
&\frac{d}{dt}\, e^{\mathbf{A}t} = \mathbf{A}\, e^{\mathbf{A}t}\;,\quad t > 0\\
&e^{\mathbf{A}\cdot 0} = \mathbf{I}
\end{aligned}
\tag{1–30}
\]

where I denotes a 2n × 2n unit matrix. Now, (e^{At})⁻¹ = e^{−At} as shown in Box 1.1. Using this relation for t = t0, (1-29) is seen to fulfil the initial value of (1-26). Since conventional differentiation rules also apply to matrix products, the fulfilment of the differential equation in (1-26) follows from differentiation of the right-hand side of (1-29) and application of (1-30), i.e.

\[
\frac{d}{dt}\mathbf{z}(t)
= \frac{d}{dt}e^{\mathbf{A}t}\left( e^{-\mathbf{A}t_0}\mathbf{z}_0 + \int_{t_0}^{t} e^{-\mathbf{A}\tau}\mathbf{F}(\tau)\,d\tau \right) + e^{\mathbf{A}t}\Big( \mathbf{0} + e^{-\mathbf{A}t}\mathbf{F}(t) \Big)
= \mathbf{A}e^{\mathbf{A}t}\left( e^{-\mathbf{A}t_0}\mathbf{z}_0 + \int_{t_0}^{t} e^{-\mathbf{A}\tau}\mathbf{F}(\tau)\,d\tau \right) + \mathbf{I}\,\mathbf{F}(t)
= \mathbf{A}\mathbf{z}(t) + \mathbf{F}(t)
\tag{1–31}
\]

² D.G. Zill and M.R. Cullen: Differential Equations with Boundary-Value Problems, 5th Ed. Brooks/Cole, 2001.

The solution to (1-30) can be represented by the following infinite series of matrix products

\[
e^{\mathbf{A}t} = \mathbf{I} + t\mathbf{A} + \frac{t^2}{2!}\mathbf{A}^2 + \frac{t^3}{3!}\mathbf{A}^3 + \cdots
\tag{1–32}
\]

where A² = AA, A³ = AAA etc. (1-32) is seen to fulfil the initial value e^{A·0} = I. The fulfilment of the matrix differential equation (1-30) follows from termwise differentiation of the right-hand side of (1-32)

\[
\frac{d}{dt}\, e^{\mathbf{A}t} = \mathbf{0} + \mathbf{A} + \frac{t}{1!}\mathbf{A}^2 + \frac{t^2}{2!}\mathbf{A}^3 + \cdots
= \mathbf{A}\left( \mathbf{I} + t\mathbf{A} + \frac{t^2}{2!}\mathbf{A}^2 + \cdots \right) = \mathbf{A}e^{\mathbf{A}t}
\tag{1–33}
\]

The right-hand side of (1-32) converges for arbitrary values of t as the number of terms increases without bound. Hence, e^{At} can in principle be calculated using this representation. However, for large values of t the convergence is very slow. In (1-29) the fundamental matrix e^{At} is needed for arbitrary positive and negative values of t. Hence, the use of (1-32) as an algorithm for e^{At} in the solution (1-29) becomes increasingly computationally expensive as the integration time interval is increased. In Box 1.1 an analytical solution for e^{At} has been indicated, which to some extent circumvents this problem. However, this approach requires that all eigenvectors and eigenvalues of A are available.

Damped eigenvibrations are obtained as linearly independent solutions to the homogeneous differential equation (1-28). Analogously to (1-8), solutions are searched for of the form

\[
\mathbf{z}(t) = \mathbf{\Psi}^{(j)}\, e^{\lambda_j t}
\tag{1–34}
\]

Insertion of (1-34) into (1-28) provides the following special eigenvalue problem of the dimension 2n for the determination of the damped eigenmodes Ψ(j) and the damped eigenvalues λj

\[
\big( \mathbf{A} - \lambda_j\mathbf{I} \big)\,\mathbf{\Psi}^{(j)} = \mathbf{0}
\tag{1–35}
\]

Since A is not symmetric, λj and Ψ(j) are generally complex. Upon complex conjugation of (1-35), it is seen that if (λ, Ψ) denotes an eigen-pair (solution) to (1-35), then (λ*, Ψ*) is also an eigen-pair, where * denotes complex conjugation. For lightly damped structures all eigenvalues are complex. In this case only n eigen-pairs (λj, Ψ(j)), j = 1, 2, ..., n need to be considered, where no eigen-pair is a complex conjugate of another in the set.

Let the first n components of Ψ(j) be assembled in the n-dimensional sub-vector Φ(j). Then, from (1-27) and (1-34) it follows that

\[
\mathbf{x}(t) = \mathbf{\Phi}^{(j)} e^{\lambda_j t} \;\Rightarrow\; \dot{\mathbf{x}}(t) = \lambda_j\mathbf{\Phi}^{(j)} e^{\lambda_j t}
\tag{1–36}
\]

Consequently, the damped eigenmodes must have the structure

\[
\mathbf{\Psi}^{(j)} = \begin{bmatrix} \mathbf{\Phi}^{(j)}\\ \lambda_j\mathbf{\Phi}^{(j)} \end{bmatrix}
\tag{1–37}
\]

Hence, merely the first n components of Ψ(j) need to be determined.

The eigenvalue problems (1-35) can be assembled into the following matrix formulation, cf. (1-14)-(1-16)

\[
\mathbf{A}\big[\mathbf{\Psi}^{(1)}\ \mathbf{\Psi}^{(2)}\ \cdots\ \mathbf{\Psi}^{(2n)}\big]
= \big[\mathbf{\Psi}^{(1)}\ \mathbf{\Psi}^{(2)}\ \cdots\ \mathbf{\Psi}^{(2n)}\big]
\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_{2n}
\end{bmatrix}
\;\Rightarrow\;
\mathbf{A}\mathbf{\Psi} = \mathbf{\Psi}\mathbf{\Lambda}_A
\tag{1–38}
\]

where

\[
\mathbf{\Lambda}_A = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_{2n}
\end{bmatrix}
\tag{1–39}
\]

\[
\mathbf{\Psi} = \big[\mathbf{\Psi}^{(1)}\ \mathbf{\Psi}^{(2)}\ \cdots\ \mathbf{\Psi}^{(2n)}\big]
\tag{1–40}
\]

The following representation of A in terms of the damped eigenmodes and eigenvalues follows
from (1-38)

A = ΨΛA Ψ−1 (1–41)

Assume that another 2n × 2n matrix B has the same eigenvectors Ψ(j) as A, whereas the eigenvalues stored in the diagonal matrix ΛB are different. Then, similar to (1-41), B has the representation

B = ΨΛB Ψ−1 (1–42)

The matrix product of A and B becomes

AB = ΨΛA Ψ−1 ΨΛB Ψ−1 = ΨΛA ΛB Ψ−1 (1–43)


Since ΛA and ΛB are diagonal matrices, matrix multiplication of these is commutative, i.e. ΛAΛB = ΛBΛA. Then, (1-43) may be written

AB = ΨΛBΛAΨ⁻¹ = ΨΛBΨ⁻¹ ΨΛAΨ⁻¹ = BA (1–44)

Consequently, if two matrices have the same eigenvectors, their matrix product is commutative. Identical eigenvectors of the two matrices can also be shown to constitute the necessary condition (the "only if" requirement) for commutative matrix multiplication.

The so-called adjoint eigenvalue problem to (1-35) reads

\[
\big( \mathbf{A}^T - \nu_i\mathbf{I} \big)\,\mathbf{\Psi}_a^{(i)} = \mathbf{0}
\tag{1–45}
\]

Hence, (νi, Ψa(i)) denotes the eigenvalue and eigenvector of the transposed matrix Aᵀ. In Box 1.2 it is shown that the eigenvalues of the basic eigenvalue problem and the adjoint eigenvalue problem are identical, i.e. νj = λj. Further, it is shown that the eigenvectors Ψ(j) and Ψa(j) fulfill the orthogonality properties

\[
\mathbf{\Psi}_a^{(i)T}\mathbf{\Psi}^{(j)} =
\begin{cases} 0 & ,\ i \neq j\\ m_i & ,\ i = j \end{cases}
\tag{1–46}
\]

\[
\mathbf{\Psi}_a^{(i)T}\mathbf{A}\,\mathbf{\Psi}^{(j)} =
\begin{cases} 0 & ,\ i \neq j\\ \lambda_i m_i & ,\ i = j \end{cases}
\tag{1–47}
\]

where mi is denoted the complex modal mass. Without any restriction this may be chosen as mi = 1. Then, the orthogonality conditions (1-46) and (1-47) may be assembled into the following matrix relations

ΨaᵀΨ = I (1–48)

ΨaᵀAΨ = ΛA (1–49)

where

\[
\mathbf{\Psi}_a = \big[\mathbf{\Psi}_a^{(1)}\ \mathbf{\Psi}_a^{(2)}\ \cdots\ \mathbf{\Psi}_a^{(2n)}\big]
\tag{1–50}
\]

From (1-48) it follows that

\[
\mathbf{\Psi}_a = \big( \mathbf{\Psi}^{-1} \big)^T
\tag{1–51}
\]

Hence, the eigenvectors Ψa(i) of the adjoint eigenvalue problem (the column vectors in Ψa) normalized to unit modal mass are determined as the row vectors of Ψ⁻¹. The eigenvectors Ψ(i) of the direct eigenvalue problem may be arbitrarily normalized. Of course, if (λi, Ψa(i)) is an eigen-solution to the adjoint eigenvalue problem, so is the complex conjugate (λi*, Ψa(i)*).

Box 1.1: Matrix exponential function

Multiple application of (1-41) provides, for j = 1, 2, ...,

\[
\left.
\begin{aligned}
\mathbf{A}^2 &= \mathbf{A}\mathbf{A} = \mathbf{\Psi}\mathbf{\Lambda}_A\mathbf{\Psi}^{-1}\,\mathbf{\Psi}\mathbf{\Lambda}_A\mathbf{\Psi}^{-1} = \mathbf{\Psi}\mathbf{\Lambda}_A^2\mathbf{\Psi}^{-1}\\
\mathbf{A}^3 &= \mathbf{A}\mathbf{A}^2 = \mathbf{\Psi}\mathbf{\Lambda}_A\mathbf{\Psi}^{-1}\,\mathbf{\Psi}\mathbf{\Lambda}_A^2\mathbf{\Psi}^{-1} = \mathbf{\Psi}\mathbf{\Lambda}_A^3\mathbf{\Psi}^{-1}\\
&\ \,\vdots\\
\mathbf{A}^{j+1} &= \mathbf{A}\mathbf{A}^{j} = \mathbf{\Psi}\mathbf{\Lambda}_A\mathbf{\Psi}^{-1}\,\mathbf{\Psi}\mathbf{\Lambda}_A^{j}\mathbf{\Psi}^{-1} = \mathbf{\Psi}\mathbf{\Lambda}_A^{j+1}\mathbf{\Psi}^{-1}
\end{aligned}
\right\}
\tag{1–52}
\]

Λ_A^{j+1} is a product of diagonal matrices, and then becomes a diagonal matrix itself. The diagonal elements become λ_k^{j+1}, where λk is the corresponding diagonal element in ΛA. Consider the matrix exponential function, cf. (1-32)

\[
e^{\mathbf{\Lambda}_A t} = \mathbf{I} + t\mathbf{\Lambda}_A + \frac{t^2}{2!}\mathbf{\Lambda}_A^2 + \frac{t^3}{3!}\mathbf{\Lambda}_A^3 + \cdots
\tag{1–53}
\]

Since all addends on the right-hand side of (1-53) are diagonal matrices, it follows that also e^{Λ_A t} becomes diagonal with the diagonal elements

\[
1 + t\lambda_k + \frac{t^2}{2!}\lambda_k^2 + \frac{t^3}{3!}\lambda_k^3 + \cdots = e^{\lambda_k t}
\tag{1–54}
\]

where the Maclaurin series for the exponential function has been used in the last statement. Then, from (1-32), (1-52) and (1-53) follows

\[
e^{\mathbf{A}t} = \mathbf{\Psi}\left( \mathbf{I} + t\mathbf{\Lambda}_A + \frac{t^2}{2!}\mathbf{\Lambda}_A^2 + \frac{t^3}{3!}\mathbf{\Lambda}_A^3 + \cdots \right)\mathbf{\Psi}^{-1}
= \mathbf{\Psi}\, e^{\mathbf{\Lambda}_A t}\,\mathbf{\Psi}^{-1}
\tag{1–55}
\]

For arbitrary positive or negative t1 and t2 it then follows that

\[
e^{\mathbf{A}t_1} e^{\mathbf{A}t_2}
= \mathbf{\Psi}e^{\mathbf{\Lambda}_A t_1}\mathbf{\Psi}^{-1}\,\mathbf{\Psi}e^{\mathbf{\Lambda}_A t_2}\mathbf{\Psi}^{-1}
= \mathbf{\Psi}\, e^{\mathbf{\Lambda}_A t_1} e^{\mathbf{\Lambda}_A t_2}\,\mathbf{\Psi}^{-1}
= \mathbf{\Psi}\, e^{\mathbf{\Lambda}_A (t_1+t_2)}\,\mathbf{\Psi}^{-1}
= e^{\mathbf{A}(t_1+t_2)}
\tag{1–56}
\]

(1-56) represents the fundamental multiplication rule of matrix exponential functions. Especially for t1 = t and t2 = −t we have

\[
e^{\mathbf{A}t} e^{-\mathbf{A}t} = e^{\mathbf{A}\cdot 0} = \mathbf{I}
\;\Rightarrow\;
e^{-\mathbf{A}t} = \big( e^{\mathbf{A}t} \big)^{-1}
\tag{1–57}
\]

Further,

\[
\mathbf{A}^{-n} = \mathbf{A}^{-1}\cdots\mathbf{A}^{-1} = \mathbf{\Psi}\mathbf{\Lambda}_A^{-n}\mathbf{\Psi}^{-1}\;,\quad n = 1, 2, \ldots
\tag{1–58}
\]

(1-58) is proved by insertion of (1-52) and (1-58) into the identity AⁿA⁻ⁿ = I. As seen, e^{At} and A⁻ⁿ have identical eigenvectors. Then, from (1-44) it follows that

\[
\mathbf{A}^{-n} e^{\mathbf{A}t} = e^{\mathbf{A}t}\mathbf{A}^{-n}\;,\quad n = 1, 2, \ldots
\tag{1–59}
\]


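The representation (1-55) suggests a simple numerical scheme. The following MATLAB sketch (with a hypothetical 2 × 2 state matrix, assumed to have simple eigenvalues) evaluates e^{At} from the eigensolutions of A and compares with the built-in expm:

```matlab
% Sketch: exp(A*t) via the modal representation (1-55), Psi*e^(LamA*t)*Psi^-1.
% A is a hypothetical example (an SDOF oscillator in state form).
A = [0 1; -100 -2];
t = 0.3;
[Psi, LamA] = eig(A);                            % A = Psi*LamA*Psi^-1
E_modal = Psi * diag(exp(diag(LamA)*t)) / Psi;   % Psi * e^(LamA*t) * Psi^-1
E_ref   = expm(A*t);                             % reference solution
disp(norm(E_modal - E_ref))                      % small; imaginary parts cancel
```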

Box 1.2: Proof of orthogonality properties of eigenvectors and adjoint eigenvectors

(1-35) is pre-multiplied with Ψa(i)ᵀ, and (1-45) is pre-multiplied with Ψ(j)ᵀ, leading to the identities

\[
\mathbf{\Psi}_a^{(i)T}\mathbf{A}\mathbf{\Psi}^{(j)} = \lambda_j\,\mathbf{\Psi}_a^{(i)T}\mathbf{\Psi}^{(j)}
\tag{1–60}
\]

\[
\mathbf{\Psi}^{(j)T}\mathbf{A}^T\mathbf{\Psi}_a^{(i)} = \nu_i\,\mathbf{\Psi}^{(j)T}\mathbf{\Psi}_a^{(i)}
\;\Rightarrow\;
\mathbf{\Psi}_a^{(i)T}\mathbf{A}\mathbf{\Psi}^{(j)} = \nu_i\,\mathbf{\Psi}_a^{(i)T}\mathbf{\Psi}^{(j)}
\tag{1–61}
\]

The last statement follows from transposing the previous one. Subtraction of (1-61) from (1-60) provides

\[
\big( \lambda_j - \nu_i \big)\,\mathbf{\Psi}_a^{(i)T}\mathbf{\Psi}^{(j)} = 0
\tag{1–62}
\]

For i = j, (1-62) can only be fulfilled for νi = λi, since Ψa(i)ᵀΨ(i) ≠ 0.

Next, presume simple eigenvalues, so λi ≠ λj for i ≠ j. Then, for i ≠ j, (1-62) can only be fulfilled if Ψa(i)ᵀΨ(j) = 0, corresponding to (1-46).

Since the right-hand side of (1-60) is zero for i ≠ j, this must also hold true for the left-hand side, i.e. Ψa(i)ᵀAΨ(j) = 0 for i ≠ j. Then for i = j, (1-60) provides the result Ψa(i)ᵀAΨ(i) = λᵢmᵢ, which completes the proof of (1-47).

Example 1.1: Equations of motion of linear viscous damped 2DOF system

[Fig. 1–1: Equations of motion of a linear viscous damped 2DOF system. Upper part: the masses m1 and m2 with displacements x1, x2, connected by the springs k1, k2, k3 and dampers c1, c2, c3, and loaded by the external forces f1, f2. Lower part: free-body diagrams of the masses with the spring forces k1x1, k2(x2 − x1), k3x2 and the damper forces c1ẋ1, c2(ẋ2 − ẋ1), c3ẋ2 applied as equivalent external forces.]

The two-degree-of-freedom system shown on Fig. 1-1 consists of the masses m 1 and m2 connected with linear
elastic springs with the spring constants k 1 , k2 , k3 , and linear viscous damper elements with the damper constants
18 Chapter 1 – INTRODUCTION

c1 , c2 , c3 . The displacement of the masses from the static equilibrium state are denoted as x 1 (t) and x2 (t). The
velocities ẋi (t) and accelerations ẍi (t) are considered positive in the same direction as the displacements x i (t)
and the external forces f i (t). The masses are cut free from the springs and dampers in the deformed state, and
the damper- and spring forces are applied as equivalent external forces. Next, Newton’s 2nd law of motion is
formulated for each of the masses leading to


m1 ẍ1 = −k1 x1 + k2 (x2 − x1 ) − c1 ẋ1 + c2 (ẋ2 − ẋ1 ) + f1 (t)
(1–63)
m2 ẍ2 = −k3 x2 − k2 (x2 − x1 ) − c3 ẋ2 − c2 (ẋ2 − ẋ1 ) + f2 (t)

(1-63) may be formulated as the following matrix differential equation

\[
\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{f}(t)\;,\quad t > t_0
\]
\[
\mathbf{x}(t) = \begin{bmatrix} x_1(t)\\ x_2(t) \end{bmatrix},\quad
\mathbf{f}(t) = \begin{bmatrix} f_1(t)\\ f_2(t) \end{bmatrix},\quad
\mathbf{M} = \begin{bmatrix} m_1 & 0\\ 0 & m_2 \end{bmatrix},\quad
\mathbf{C} = \begin{bmatrix} c_1+c_2 & -c_2\\ -c_2 & c_2+c_3 \end{bmatrix},\quad
\mathbf{K} = \begin{bmatrix} k_1+k_2 & -k_2\\ -k_2 & k_2+k_3 \end{bmatrix}
\tag{1–64}
\]

For each of the masses an initial displacement xi(t0) = xi,0 from the static equilibrium state and an initial velocity ẋi(t0) = ẋi,0 are specified. These are assembled into the following initial value vectors

\[
\mathbf{x}_0 = \mathbf{x}(t_0) = \begin{bmatrix} x_{1,0}\\ x_{2,0} \end{bmatrix},\quad
\dot{\mathbf{x}}_0 = \dot{\mathbf{x}}(t_0) = \begin{bmatrix} \dot{x}_{1,0}\\ \dot{x}_{2,0} \end{bmatrix}
\tag{1–65}
\]

The presented system will be further analyzed in various numerical examples throughout the book.

Example 1.2: Discretized equations of motion of a vibrating string

[Fig. 1–2: Discretization of a vibrating string of length l with pre-stress force F. The string is divided into n elements of length ∆l, with nodal displacements u1, u2, ..., un−1 of the transverse displacement field u(x, t).]

Fig. 1-2 shows a vibrating string with the pre-stress force F and the mass per unit length µ. The string has been divided into n identical elements, each of the length ∆l. Hence, the total length of the string is l = n∆l. The displacement u(x, t) of the string at the position x and time t in the transverse direction is given by the wave equation with homogeneous boundary conditions¹

\[
\begin{aligned}
&\mu\frac{\partial^2 u}{\partial t^2} - F\frac{\partial^2 u}{\partial x^2} = 0\;,\quad x \in\, ]0, l[\\
&u(0, t) = u(l, t) = 0
\end{aligned}
\tag{1–66}
\]

where x is measured from the left support point. The spatial differential operator in (1-66) is discretized by means of a central difference operator,² i.e.

\[
F\,\frac{\partial^2 u(x_i, t)}{\partial x^2} \simeq \frac{F}{\Delta l^2}\big( u_{i+1} - 2u_i + u_{i-1} \big)\;,\quad i = 1, \ldots, n-1
\tag{1–67}
\]

where ui(t) = u(xi, t), xi = i∆l. Further, let üi(t) = ∂²u(xi, t)/∂t². The boundary conditions imply that u0(t) = un(t) = 0. Then, the discretized wave equation may be represented by the matrix differential equation

Mẍ(t) + Kx(t) = 0 (1–68)

\[
\mathbf{x}(t) = \begin{bmatrix} u_1(t)\\ u_2(t)\\ u_3(t)\\ \vdots\\ u_{n-2}(t)\\ u_{n-1}(t) \end{bmatrix},\quad
\mathbf{M} = \mu\Delta l\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
0 & 0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0\\
0 & 0 & 0 & \cdots & 0 & 1
\end{bmatrix},\quad
\mathbf{K} = \frac{F}{\Delta l}\begin{bmatrix}
2 & -1 & 0 & \cdots & 0 & 0\\
-1 & 2 & -1 & \cdots & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1\\
0 & 0 & 0 & \cdots & -1 & 2
\end{bmatrix}
\tag{1–69}
\]

Alternatively, the wave equation may be discretized by means of a finite element approach. Assuming linear interpolation between the nodal values stored in the vector x(t), and using the same interpolation for the displacement field and the variational field (Galerkin variation), the following mass and stiffness matrices are obtained

\[
\mathbf{M} = \frac{\mu\Delta l}{6}\begin{bmatrix}
4 & 1 & 0 & \cdots & 0 & 0\\
1 & 4 & 1 & \cdots & 0 & 0\\
0 & 1 & 4 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 4 & 1\\
0 & 0 & 0 & \cdots & 1 & 4
\end{bmatrix},\quad
\mathbf{K} = \frac{F}{\Delta l}\begin{bmatrix}
2 & -1 & 0 & \cdots & 0 & 0\\
-1 & 2 & -1 & \cdots & 0 & 0\\
0 & -1 & 2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 2 & -1\\
0 & 0 & 0 & \cdots & -1 & 2
\end{bmatrix}
\tag{1–70}
\]

(1-70) represents the so-called consistent mass matrix, for which the same interpolation algorithm is used for discretizing the kinetic and the potential energy.¹ By contrast the diagonal mass matrix in (1-69) is referred to as a lumped mass matrix. As seen, the central difference operator and the Galerkin variation with piecewise linear interpolation lead to the same stiffness matrix. The presented system will be further analyzed in various numerical examples in what follows.

The calculated eigenvalues based on the system matrices (1-69) and (1-70) are shown in Fig. 1-3 as a function of the number of elements n. The solutions based on the lumped mass matrix (1-69) and the consistent mass matrix (1-70) are shown with dotted and dashed signature, respectively. The numerical solutions have been given relative to the analytical solutions

\[
\omega_{j,a} = j\pi\sqrt{\frac{F}{\mu l^2}}\;,\quad j = 1, \ldots, 4
\tag{1–71}
\]

As seen, the consistent mass matrix provides upper bounds in accordance with the Rayleigh-Ritz principle described in Section 4.2. By contrast the lumped mass matrix provides lower bounds, when used in combination with the consistent stiffness matrix. There is no formal proof of this property, which merely is an empirical observation fulfilled in many dynamical problems. The indicated observation immediately suggests that an improvement of the numerical solutions may be obtained by using a linear combination of the consistent and the lumped mass matrix. Typically, the mean value is used, leading to the mass matrix

\[
\mathbf{M} = \frac{1}{2}\,\frac{\mu\Delta l}{6}\begin{bmatrix}
4 & 1 & \cdots & 0\\
1 & 4 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 4
\end{bmatrix}
+ \frac{1}{2}\,\mu\Delta l\begin{bmatrix}
1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 1
\end{bmatrix}
= \frac{\mu\Delta l}{12}\begin{bmatrix}
10 & 1 & 0 & \cdots & 0 & 0\\
1 & 10 & 1 & \cdots & 0 & 0\\
0 & 1 & 10 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 10 & 1\\
0 & 0 & 0 & \cdots & 1 & 10
\end{bmatrix}
\tag{1–72}
\]

(1-72) is solved with the consistent stiffness matrix (1-70). The results are shown with a dashed-dotted signature in Fig. 1-3. As expected the results show a significant improvement. A theoretical argument for using the mean value of the consistent and lumped mass matrices for the combined mass matrix has been given by Krenk.³

[Fig. 1–3: Undamped eigenvibrations of the string. Ratios ωj/ωj,a of the four lowest numerical eigenfrequencies to the analytical solution (1-71) as a function of the number of elements n. —: Analytical solution. - - -: Consistent mass matrix. ···: Lumped mass matrix. -·-·: Combined mass matrix.]

³ S. Krenk: Dispersion-corrected explicit integration of the wave equation. Computer Methods in Applied Mechanics and Engineering, 191, pp. 975-987, 2001.
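A minimal MATLAB sketch of the three discretizations, in the spirit of Exercise 1.3, might look as follows; the values F = 1 N, µ = 1 kg/m and l = 1 m are arbitrary, since the ratios to (1-71) do not depend on them:

```matlab
% Sketch: eigenfrequencies of the discretized string with the lumped (1-69),
% consistent (1-70) and combined (1-72) mass matrices. F, mu, l are
% arbitrary unit values.
n = 10;  F = 1;  mu = 1;  l = 1;  dl = l/n;
e = ones(n-1, 1);
K  = (F/dl) * full(spdiags([-e 2*e -e], -1:1, n-1, n-1));     % stiffness
Ml = mu*dl * eye(n-1);                                        % lumped mass
Mc = (mu*dl/6) * full(spdiags([e 4*e e], -1:1, n-1, n-1));    % consistent mass
Mm = 0.5*(Ml + Mc);                                           % combined mass
wa = (1:4)' * pi * sqrt(F/(mu*l^2));                          % (1-71)
for Mmat = {Ml, Mc, Mm}
    w = sqrt(sort(eig(K, Mmat{1})));
    disp((w(1:4) ./ wa)')     % cf. Fig. 1-3: below 1, above 1, close to 1
end
```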

Example 1.3: Verification of eigensolutions

Given the following mass and stiffness matrices

\[
\mathbf{M} = \begin{bmatrix} \tfrac{5}{4} & 0\\ 0 & \tfrac{1}{5} \end{bmatrix},\quad
\mathbf{K} = \begin{bmatrix} 5 & -2\\ -2 & 2 \end{bmatrix}
\tag{1–73}
\]

Verify that the eigensolutions with modal masses normalized to 1 are given by

\[
\mathbf{\Lambda} = \begin{bmatrix} \omega_1^2 & 0\\ 0 & \omega_2^2 \end{bmatrix} = \begin{bmatrix} 2 & 0\\ 0 & 12 \end{bmatrix},\quad
\mathbf{\Phi} = \big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\big] = \begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5}\\ 1 & -2 \end{bmatrix}
\tag{1–74}
\]

Based on the proposed eigensolutions the following calculations are performed, cf. (1-14)

\[
\left.
\begin{aligned}
\mathbf{K}\mathbf{\Phi} &= \begin{bmatrix} 5 & -2\\ -2 & 2 \end{bmatrix}\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5}\\ 1 & -2 \end{bmatrix} = \begin{bmatrix} 2 & 6\\ \tfrac{2}{5} & -\tfrac{24}{5} \end{bmatrix}\\
\mathbf{M}\mathbf{\Phi}\mathbf{\Lambda} &= \begin{bmatrix} \tfrac{5}{4} & 0\\ 0 & \tfrac{1}{5} \end{bmatrix}\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5}\\ 1 & -2 \end{bmatrix}\begin{bmatrix} 2 & 0\\ 0 & 12 \end{bmatrix} = \begin{bmatrix} 2 & 6\\ \tfrac{2}{5} & -\tfrac{24}{5} \end{bmatrix}
\end{aligned}
\right\}
\tag{1–75}
\]

This proves the validity of the proposed eigensolutions. The orthonormality follows from the following calculations, cf. (1-19) and (1-21)

\[
\left.
\begin{aligned}
\mathbf{\Phi}^T\mathbf{M}\mathbf{\Phi} &= \begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5}\\ 1 & -2 \end{bmatrix}^T\begin{bmatrix} \tfrac{5}{4} & 0\\ 0 & \tfrac{1}{5} \end{bmatrix}\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5}\\ 1 & -2 \end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\\
\mathbf{\Phi}^T\mathbf{K}\mathbf{\Phi} &= \begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5}\\ 1 & -2 \end{bmatrix}^T\begin{bmatrix} 5 & -2\\ -2 & 2 \end{bmatrix}\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5}\\ 1 & -2 \end{bmatrix} = \begin{bmatrix} 2 & 0\\ 0 & 12 \end{bmatrix}
\end{aligned}
\right\}
\tag{1–76}
\]
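The same verification can be delegated to MATLAB; a minimal sketch is given below (the ordering and signs of the columns returned by eig may differ from (1-74)):

```matlab
% Check of Example 1.3: solve the GEVP with eig and verify (1-19), (1-21).
M = [5/4 0; 0 1/5];   K = [5 -2; -2 2];
[Phi, Lam] = eig(K, M);                      % K*Phi = M*Phi*Lam
Phi = Phi ./ sqrt(diag(Phi'*M*Phi))';        % normalize to unit modal mass
disp(Phi'*M*Phi)                             % -> identity matrix
disp(Phi'*K*Phi)                             % -> diag(2, 12), up to ordering
```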

Example 1.4: M- and K-orthogonal vectors

Given the following mass and stiffness matrices

\[
\mathbf{M} = \begin{bmatrix} \tfrac{1}{2} & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac{1}{2} \end{bmatrix},\quad
\mathbf{K} = \begin{bmatrix} 2 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 2 \end{bmatrix}
\tag{1–77}
\]

Additionally, the following vectors are considered

\[
\mathbf{v}_1 = \begin{bmatrix} 1\\ \tfrac{\sqrt{2}}{2}\\ 0 \end{bmatrix},\quad
\mathbf{v}_2 = \begin{bmatrix} 1\\ -\tfrac{\sqrt{2}}{2}\\ 0 \end{bmatrix}
\tag{1–78}
\]

From (1-78) the following matrix is formed

\[
\mathbf{V} = [\mathbf{v}_1\ \mathbf{v}_2] = \begin{bmatrix} 1 & 1\\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\\ 0 & 0 \end{bmatrix}
\tag{1–79}
\]

We may then perform the following calculations, cf. (1-19) and (1-21)

\[
\left.
\begin{aligned}
\mathbf{V}^T\mathbf{M}\mathbf{V} &= \begin{bmatrix} 1 & 1\\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\\ 0 & 0 \end{bmatrix}^T
\begin{bmatrix} \tfrac{1}{2} & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac{1}{2} \end{bmatrix}
\begin{bmatrix} 1 & 1\\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\\
\mathbf{V}^T\mathbf{K}\mathbf{V} &= \begin{bmatrix} 1 & 1\\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\\ 0 & 0 \end{bmatrix}^T
\begin{bmatrix} 2 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} 1 & 1\\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 2.5858 & 0\\ 0 & 5.4142 \end{bmatrix}
\end{aligned}
\right\}
\tag{1–80}
\]
(1-80) shows that the vectors v1 and v2 are mutually orthogonal with weights M and K, and that both have been normalized to unit modal mass. As will be shown in Example 1.5, neither v1 nor v2 are eigenmodes, and the eigenvalues are different from 2.5858 and 5.4142. However, if three linearly independent vectors are mutually orthogonal weighted with the three-dimensional matrices M and K, they will be eigenmodes of the system.
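The calculations (1-80) are quickly confirmed numerically with a short MATLAB sketch:

```matlab
% Check of Example 1.4: v1, v2 are M-orthonormal and K-orthogonal,
% yet 2.5858 and 5.4142 are not eigenvalues (those are 2, 4, 6; Example 1.5).
M = diag([1/2 1 1/2]);
K = [2 -1 0; -1 4 -1; 0 -1 2];
V = [1 1; sqrt(2)/2 -sqrt(2)/2; 0 0];
disp(V'*M*V)                 % -> identity
disp(V'*K*V)                 % -> diag(2.5858, 5.4142)
disp(sort(eig(K, M))')       % -> 2 4 6, so v1, v2 are not eigenmodes
```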

Example 1.5: Analytical calculation of eigensolutions

The mass and stiffness matrices defined in Example 1.4 are considered again. Now, an analytical solution of the eigenmodes and eigenvalues is wanted.

The generalized eigenvalue problem (1-9) becomes

\[
\begin{bmatrix} 2 - \tfrac{1}{2}\lambda_j & -1 & 0\\ -1 & 4 - \lambda_j & -1\\ 0 & -1 & 2 - \tfrac{1}{2}\lambda_j \end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\\ \Phi_3^{(j)} \end{bmatrix}
= \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}
\tag{1–81}
\]

The characteristic equation (1-10) becomes

\[
P(\lambda) = \det\begin{bmatrix} 2 - \tfrac{1}{2}\lambda_j & -1 & 0\\ -1 & 4 - \lambda_j & -1\\ 0 & -1 & 2 - \tfrac{1}{2}\lambda_j \end{bmatrix}
= \Big( 2 - \tfrac{1}{2}\lambda_j \Big)\Big( \big( 4 - \lambda_j \big)\Big( 2 - \tfrac{1}{2}\lambda_j \Big) - 1 \Big) + 1\cdot\Big( -\Big( 2 - \tfrac{1}{2}\lambda_j \Big) \Big)
= \Big( 2 - \tfrac{1}{2}\lambda_j \Big)\Big( 6 - 4\lambda_j + \tfrac{1}{2}\lambda_j^2 \Big) = 0
\;\Rightarrow\;
\lambda_j = \begin{cases} 2 & ,\ j = 1\\ 4 & ,\ j = 2\\ 6 & ,\ j = 3 \end{cases}
\tag{1–82}
\]

Initially, the eigenmodes are normalized by setting an arbitrary component to 1. Here we shall choose Φ3(j) = 1. The remaining components Φ1(j) and Φ2(j) are then determined from any two of the three equations (1-81). The first and the second equations are chosen, corresponding to

\[
\begin{bmatrix} 2 - \tfrac{1}{2}\lambda_j & -1\\ -1 & 4 - \lambda_j \end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)} \end{bmatrix}
= \begin{bmatrix} 0\\ 1 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\\ \Phi_3^{(j)} \end{bmatrix}
= \begin{bmatrix} \dfrac{2}{14 - 8\lambda_j + \lambda_j^2}\\[2mm] \dfrac{4 - \lambda_j}{14 - 8\lambda_j + \lambda_j^2}\\[2mm] 1 \end{bmatrix}
\tag{1–83}
\]

The modal matrix with eigenmodes normalized as indicated in (1-83) is denoted as Φ̄. This becomes

\[
\bar{\mathbf{\Phi}} = \begin{bmatrix} 1 & -1 & 1\\ 1 & 0 & -1\\ 1 & 1 & 1 \end{bmatrix}
\tag{1–84}
\]

The modal masses become, cf. (1-19)

\[
\mathbf{m} = \bar{\mathbf{\Phi}}^T\mathbf{M}\bar{\mathbf{\Phi}} = \begin{bmatrix} 2 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2 \end{bmatrix}
\tag{1–85}
\]

Φ(1) denotes the 1st eigenmode normalized to unit modal mass. This is related to Φ̄(1) in the following way

\[
\mathbf{\Phi}^{(1)} = \frac{1}{\sqrt{M_1}}\,\bar{\mathbf{\Phi}}^{(1)} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}
\tag{1–86}
\]

The other modes are treated in the same manner, which results in the following eigensolutions

\[
\mathbf{\Lambda} = \begin{bmatrix} \omega_1^2 & 0 & 0\\ 0 & \omega_2^2 & 0\\ 0 & 0 & \omega_3^2 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 6 \end{bmatrix},\quad
\mathbf{\Phi} = \big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\ \mathbf{\Phi}^{(3)}\big]
= \begin{bmatrix} \tfrac{\sqrt{2}}{2} & -1 & \tfrac{\sqrt{2}}{2}\\ \tfrac{\sqrt{2}}{2} & 0 & -\tfrac{\sqrt{2}}{2}\\ \tfrac{\sqrt{2}}{2} & 1 & \tfrac{\sqrt{2}}{2} \end{bmatrix}
\tag{1–87}
\]
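The procedure (1-83)-(1-86) can be mimicked numerically; the sketch below finds each eigenmode as the null space of K − λjM and rescales to unit modal mass:

```matlab
% Sketch of the procedure in Example 1.5: for each root of the
% characteristic polynomial, solve the singular system and rescale.
M = diag([1/2 1 1/2]);
K = [2 -1 0; -1 4 -1; 0 -1 2];
for lam = [2 4 6]
    PhiBar = null(K - lam*M);                    % 1-dimensional null space
    Phi = PhiBar / sqrt(PhiBar'*M*PhiBar);       % unit modal mass, cf. (1-86)
    fprintf('lambda = %g:  Phi = [% .4f % .4f % .4f]\n', lam, Phi);
end
```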

Example 1.6: Undamped and damped eigenvibrations of 2DOF system

[Fig. 1–4: Eigenvibrations of the 2DOF system from Example 1.1 with spring constants k1 = 100 N/m, k2 = 200 N/m, k3 = 300 N/m, masses m1 = 1 kg, m2 = 2 kg, and damper constants c1 = 3 kg/s, c2 = 2 kg/s, c3 = 1 kg/s.]

The system in Example 1.1 is considered again with the structural parameters defined in Fig. 1-4. The mass, damping and stiffness matrices become, cf. (1-64)

\[
\mathbf{M} = \begin{bmatrix} 1 & 0\\ 0 & 2 \end{bmatrix}\ \mathrm{kg},\quad
\mathbf{C} = \begin{bmatrix} 5 & -2\\ -2 & 3 \end{bmatrix}\ \frac{\mathrm{kg}}{\mathrm{s}},\quad
\mathbf{K} = \begin{bmatrix} 300 & -200\\ -200 & 500 \end{bmatrix}\ \frac{\mathrm{N}}{\mathrm{m}}
\tag{1–88}
\]

The eigensolutions with modal masses normalized to 1 become

\[
\mathbf{\Lambda} = \begin{bmatrix} \omega_1^2 & 0\\ 0 & \omega_2^2 \end{bmatrix} = \begin{bmatrix} 131.39 & 0\\ 0 & 418.61 \end{bmatrix}\ \mathrm{s}^{-2},\quad
\mathbf{\Phi} = \big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\big] = \begin{bmatrix} 0.64262 & 0.76618\\ 0.54177 & -0.45440 \end{bmatrix}
\tag{1–89}
\]

The matrix A defined by (1-27) becomes

\[
\mathbf{A} = \begin{bmatrix} \mathbf{0} & \mathbf{I}\\ -\mathbf{M}^{-1}\mathbf{K} & -\mathbf{M}^{-1}\mathbf{C} \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -300 & 200 & -5.0 & 2.0\\ 100 & -250 & 1.0 & -1.5 \end{bmatrix}
\tag{1–90}
\]

The eigenvalues and eigenvectors become

\[
\mathbf{\Lambda}_A = \begin{bmatrix} \lambda_1 & 0 & 0 & 0\\ 0 & \lambda_2 & 0 & 0\\ 0 & 0 & \lambda_3 & 0\\ 0 & 0 & 0 & \lambda_4 \end{bmatrix}
= \begin{bmatrix}
-0.7763 + 11.480i & 0 & 0 & 0\\
0 & -2.4737 + 20.231i & 0 & 0\\
0 & 0 & -0.7763 - 11.480i & 0\\
0 & 0 & 0 & -2.4737 - 20.231i
\end{bmatrix}
\tag{1–91}
\]

\[
\mathbf{\Psi} = \big[\mathbf{\Psi}^{(1)}\ \mathbf{\Psi}^{(2)}\ \mathbf{\Psi}^{(3)}\ \mathbf{\Psi}^{(4)}\big]
= \begin{bmatrix} \mathbf{\Phi}^{(1)} & \mathbf{\Phi}^{(2)} & \mathbf{\Phi}^{(1)*} & \mathbf{\Phi}^{(2)*}\\ \lambda_1\mathbf{\Phi}^{(1)} & \lambda_2\mathbf{\Phi}^{(2)} & \lambda_1^*\mathbf{\Phi}^{(1)*} & \lambda_2^*\mathbf{\Phi}^{(2)*} \end{bmatrix} =
\]
\[
\begin{bmatrix}
1.1693 - 0.1414i & -1.6846 - 0.3657i & 1.1693 + 0.1414i & -1.6846 + 0.3657i\\
1 & 1 & 1 & 1\\
0.7153 + 13.534i & 11.565 - 33.177i & 0.7153 - 13.534i & 11.565 + 33.177i\\
-0.7763 + 11.480i & -2.4737 + 20.231i & -0.7763 - 11.480i & -2.4737 - 20.231i
\end{bmatrix}
\tag{1–92}
\]

As seen from (1-92), the second component of the sub-vectors Φ(1) and Φ(2) has been normalized to one. Hence, the entire modal matrix with 16 components is defined by merely 4 entities, namely the first components of the sub-vectors Φ(1) and Φ(2) and the eigenvalues λ1 and λ2.

The eigenvectors of the adjoint eigenvalue problem follow from (1-51) and (1-92)

\[
\mathbf{\Psi}_a = \big( \mathbf{\Psi}^{-1} \big)^T
= \big[\mathbf{\Psi}_a^{(1)}\ \mathbf{\Psi}_a^{(2)}\ \mathbf{\Psi}_a^{(3)}\ \mathbf{\Psi}_a^{(4)}\big]
= \big[\mathbf{\Psi}_a^{(1)}\ \mathbf{\Psi}_a^{(2)}\ \mathbf{\Psi}_a^{(1)*}\ \mathbf{\Psi}_a^{(2)*}\big] =
\]
\[
\begin{bmatrix}
0.1723 - 0.0430i & -0.1723 + 0.0388i & 0.1723 + 0.0430i & -0.1723 - 0.0388i\\
0.3007 + 0.0411i & 0.1993 - 0.0592i & 0.3007 - 0.0411i & 0.1993 + 0.0592i\\
-0.0004 - 0.0154i & 0.0004 + 0.0087i & -0.0004 + 0.0154i & 0.0004 - 0.0087i\\
0.0025 - 0.0260i & -0.0025 - 0.0098i & 0.0025 + 0.0260i & -0.0025 + 0.0098i
\end{bmatrix}
\tag{1–93}
\]

As seen, Ψa(3) and Ψa(4) become the complex conjugates of Ψa(1) and Ψa(2), cf. the remarks subsequent to (1-51).
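Example 1.6 can be reproduced with a few lines of MATLAB; the sketch below builds the state matrix (1-27) and checks the adjoint relations (1-49) and (1-51):

```matlab
% Sketch reproducing Example 1.6: damped eigensolutions of the 2DOF system.
M = [1 0; 0 2];  C = [5 -2; -2 3];  K = [300 -200; -200 500];
A = [zeros(2), eye(2); -M\K, -M\C];      % state matrix (1-27), cf. (1-90)
[Psi, LamA] = eig(A);                    % complex eigenpairs, cf. (1-91)-(1-92)
PsiA = inv(Psi).';                       % adjoint eigenvectors, (1-51)
disp(diag(LamA).')                       % -0.7763 +/- 11.480i, -2.4737 +/- 20.231i
disp(norm(PsiA.'*A*Psi - LamA))          % check (1-49): close to zero
```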

1.2 Solution of Initial Value Problem by Modal Decomposition Techniques

Assume that the undamped eigenmodes Φ(i), in addition to the orthogonality properties (1-17) and (1-18), also are orthogonal weighted with the damping matrix, i.e.

\[
\mathbf{\Phi}^{(i)T}\mathbf{C}\mathbf{\Phi}^{(j)} =
\begin{cases} 0 & ,\ i \neq j\\ 2\zeta_i\omega_i M_i & ,\ i = j \end{cases}
\tag{1–94}
\]

ζi denotes the modal damping ratio. In practice (1-94) is fulfilled, if the structure is lightly
damped and the eigenfrequencies are well separated.1 The orthogonality properties may be
assembled into the following matrix relation similar to (1-19) and (1-21)

ΦT CΦ = c (1–95)

where

\[
\mathbf{c} = \begin{bmatrix}
2\omega_1\zeta_1 M_1 & 0 & \cdots & 0\\
0 & 2\omega_2\zeta_2 M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 2\omega_n\zeta_n M_n
\end{bmatrix}
\tag{1–96}
\]

The undamped eigenmodes are linearly independent and may be used as a basis in the n-dimensional vector space. Hence, the displacement vector x(t) may be written as

\[
\mathbf{x}(t) = \sum_{j=1}^{n} \mathbf{\Phi}^{(j)} q_j(t) = \mathbf{\Phi}\mathbf{q}(t)\;,\quad
\mathbf{q}(t) = \begin{bmatrix} q_1(t)\\ q_2(t)\\ \vdots\\ q_n(t) \end{bmatrix}
\tag{1–97}
\]

where q1(t), ..., qn(t) represent the undamped modal coordinates, i.e. the coordinates in the vector basis formed by the undamped eigenmodes Φ(1), ..., Φ(n). Insertion of (1-97) into (1-1), followed by a pre-multiplication with Φᵀ and use of (1-19), (1-21), (1-94), provides the following matrix differential equation for the modal coordinates

\[
\begin{aligned}
&\mathbf{m}\ddot{\mathbf{q}}(t) + \mathbf{c}\dot{\mathbf{q}}(t) + \mathbf{k}\mathbf{q}(t) = \mathbf{F}(t)\;,\quad t > t_0\\
&\mathbf{q}(t_0) = \mathbf{\Phi}^{-1}\mathbf{x}_0\;,\quad \dot{\mathbf{q}}(t_0) = \mathbf{\Phi}^{-1}\dot{\mathbf{x}}_0
\end{aligned}
\tag{1–98}
\]

where

\[
\mathbf{F}(t) = \mathbf{\Phi}^T\mathbf{f}(t) = \begin{bmatrix} F_1(t)\\ F_2(t)\\ \vdots\\ F_n(t) \end{bmatrix}
\tag{1–99}
\]

F1(t), ..., Fn(t) are denoted the modal loads. Since m, c and k are diagonal matrices, the component differential equations related to (1-98) decouple completely. This is caused by the orthogonality condition (1-94), for which reason this relation is referred to as the decoupling condition. The differential equation for the kth modal coordinate reads

\[
M_k\Big( \ddot{q}_k(t) + 2\zeta_k\omega_k\dot{q}_k(t) + \omega_k^2 q_k(t) \Big) = F_k(t)\;,\quad k = 1, \ldots, n
\tag{1–100}
\]

Hence, the decoupling condition reduces the integration of a linear n-degree-of-freedom system to the integration of n single-degree-of-freedom oscillators.

Typically, the dynamic response is carried by the lowest modes in the expansion (1-97). Assume that the modal response above the first n1 ≪ n modes may be disregarded. Then (1-97) reduces to

\[
\mathbf{x}(t) \simeq \sum_{j=1}^{n_1} \mathbf{\Phi}^{(j)} q_j(t) = \mathbf{\Phi}_1\mathbf{q}_1(t)
\tag{1–101}
\]

where Φ1 is a reduced modal matrix of dimension n × n1, and q1(t) is a sub-vector of modal coordinates defined as

\[
\mathbf{\Phi}_1 = \big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\ \cdots\ \mathbf{\Phi}^{(n_1)}\big]\;,\quad
\mathbf{q}_1(t) = \begin{bmatrix} q_1(t)\\ q_2(t)\\ \vdots\\ q_{n_1}(t) \end{bmatrix}
\tag{1–102}
\]
(1-101) completely ignores the influence of the higher modes. Although the dynamic response of these modes is ignorable, they may still influence the low-frequency components via a quasi-static response component. A consistent correction taking this effect into consideration reads¹

\[
\mathbf{x}(t) \simeq \sum_{j=1}^{n_1} \mathbf{\Phi}^{(j)} q_j(t)
+ \left( \mathbf{K}^{-1} - \sum_{j=1}^{n_1} \frac{1}{\omega_j^2 M_j}\,\mathbf{\Phi}^{(j)}\mathbf{\Phi}^{(j)T} \right)\mathbf{f}(t)
\tag{1–103}
\]

(1-103) may be represented in terms of the following equivalent matrix formulation

\[
\mathbf{x}(t) \simeq \mathbf{\Phi}_1\mathbf{q}_1(t) + \Big( \mathbf{K}^{-1} - \mathbf{\Phi}_1\mathbf{k}_1^{-1}\mathbf{\Phi}_1^T \Big)\,\mathbf{f}(t)
\tag{1–104}
\]

where

\[
\mathbf{k}_1 = \begin{bmatrix}
\omega_1^2 M_1 & 0 & \cdots & 0\\
0 & \omega_2^2 M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \omega_{n_1}^2 M_{n_1}
\end{bmatrix}
\tag{1–105}
\]

Both (1-101) and (1-103) require knowledge of the first n1 eigen-pairs (ωj², Φ(j)). The corresponding modal coordinates are determined from the first n1 equations in (1-100).
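The quasi-static term in (1-103)/(1-104) is just the residual flexibility of the truncated modes: with unit modal masses, K⁻¹ = Σⱼ Φ(j)Φ(j)ᵀ/ωj², so the bracketed matrix equals the sum over j > n1. A small MATLAB sketch, reusing the matrices from Examples 1.4/1.5:

```matlab
% Sketch: the correction matrix in (1-104) equals the flexibility
% contribution of the truncated modes (here mode 3 of Example 1.5).
M = diag([1/2 1 1/2]);  K = [2 -1 0; -1 4 -1; 0 -1 2];
[Phi, Lam] = eig(K, M);
[lam, ix] = sort(diag(Lam));   Phi = Phi(:, ix);
Phi = Phi ./ sqrt(diag(Phi'*M*Phi))';        % unit modal masses
n1 = 2;   Phi1 = Phi(:, 1:n1);   k1 = diag(lam(1:n1));
R = inv(K) - Phi1*(k1\Phi1');                % bracket in (1-104)
disp(norm(R - Phi(:,3)*Phi(:,3)'/lam(3)))    % -> ~0: truncated-mode term
```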
Correspondingly, the 2n eigenvectors Ψ(j), j = 1, ..., 2n of the matrix A form a vector basis in the 2n-dimensional vector space. Then, the state vector z(t) admits the representation

\[
\mathbf{z}(t) = \sum_{j=1}^{2n} \mathbf{\Psi}^{(j)} q_j(t) = \mathbf{\Psi}\mathbf{q}(t)
\tag{1–106}
\]

where

\[
\mathbf{q}(t) = \begin{bmatrix} q_1(t)\\ q_2(t)\\ \vdots\\ q_{2n}(t) \end{bmatrix}
\tag{1–107}
\]

q1(t), ..., q2n(t) represent the damped modal coordinates, i.e. the coordinates in the vector basis made up of the damped eigenmodes Ψ(1), ..., Ψ(2n). Insertion of (1-106) into (1-26), followed by pre-multiplication with Ψaᵀ and use of (1-48), (1-49), provides the following matrix differential equation for the damped modal coordinates

\[
\begin{aligned}
&\dot{\mathbf{q}}(t) = \mathbf{\Lambda}_A\mathbf{q}(t) + \mathbf{G}(t)\;,\quad t > t_0\\
&\mathbf{q}(t_0) = \mathbf{\Psi}^{-1}\mathbf{z}_0 = \mathbf{\Psi}_a^T\mathbf{z}_0
\end{aligned}
\tag{1–108}
\]

where

\[
\mathbf{G}(t) = \mathbf{\Psi}_a^T\mathbf{F}(t) = \begin{bmatrix} G_1(t)\\ G_2(t)\\ \vdots\\ G_{2n}(t) \end{bmatrix}
\tag{1–109}
\]

In the initial value statement of (1-108) the relation (1-51) between the adjoint and direct modal matrices has been used. Gj(t) = Ψa(j)ᵀF(t) denotes the jth damped modal load.

(1-108) indicates 2n decoupled complex 1st order differential equations. The differential equation for the jth modal coordinate reads

\[
\dot{q}_j(t) = \lambda_j q_j(t) + G_j(t)\;,\quad j = 1, \ldots, 2n
\tag{1–110}
\]

Since (λj+n, Ψa(j+n)) = (λj*, Ψa(j)*) for j = 1, ..., n, it follows that Gj+n(t) = Gj*(t), and in turn that qj+n(t) = qj*(t). Hence, merely the first n differential equations (1-110) need to be integrated. Then, (1-106) may be written

\[
\mathbf{z}(t) = 2\,\mathrm{Re}\left\{ \sum_{j=1}^{n} \mathbf{\Psi}^{(j)} q_j(t) \right\}
\tag{1–111}
\]

As is the case for the expansion in undamped modal coordinates, the response is primarily carried by the lowest n1 modes, leading to the following reduced form of (1-111)

\[
\mathbf{z}(t) \simeq 2\,\mathrm{Re}\big\{ \mathbf{\Psi}_1\mathbf{q}_1(t) \big\}
\tag{1–112}
\]

where

\[
\mathbf{\Psi}_1 = \big[\mathbf{\Psi}^{(1)}\ \mathbf{\Psi}^{(2)}\ \cdots\ \mathbf{\Psi}^{(n_1)}\big]\;,\quad
\mathbf{q}_1(t) = \begin{bmatrix} q_1(t)\\ q_2(t)\\ \vdots\\ q_{n_1}(t) \end{bmatrix}
\tag{1–113}
\]

(1-101), (1-103) and (1-112) describe the dynamic system with fewer coordinates than the original formulation (1-1). For this reason such formulations are referred to as system reduction schemes. A system reduction scheme with due consideration to the quasi-static response may also be formulated as a correction to (1-112).¹

1.3 Conclusions

On condition that the convolution integral is evaluated numerically, an analytical solution to the initial value problem (1-1) is provided by the result (1-3). Since this solution relies on the Fourier transform (1-6) of the frequency response matrix for the impulse response matrix, the approach becomes computationally prohibitive for a large number of degrees of freedom. Alternatively, if the initial value problem is reformulated in the state vector form (1-26), the analytical solution (1-29) is obtained. This solution relies on the fundamental matrix in terms of the matrix exponential function for the corresponding homogeneous differential system (1-28). The matrix exponential function may be calculated analytically as indicated by (1-55), but the solution requires all eigen-solutions to the system matrix A. Again, the calculation of these becomes prohibitive for large systems. Hence, both analytical and semi-analytical solution approaches are out of the question for large degree-of-freedom systems.

The state vector formulation (1-26) directly admits the application of vectorial generalizations of standard ordinary differential equation solvers such as the Euler method, the extended Euler method, the various Runge-Kutta algorithms or the Adams-Bashforth/Adams-Moulton algorithm.² As is the case for all conditionally stable algorithms, the numerical stability of these schemes is determined by the length of the time step in proportion to the eigenperiod of the highest mode of the system. Hence, in order to ensure stability for large scale systems, excessively small time steps become necessary, which means that the high accuracy of some of these algorithms cannot be utilized. Consequently, there is a need for numerical matrix differential equation solvers for which the length of the time step is determined from accuracy rather than stability. These algorithms predict stable although inaccurate responses for the highest modes. Instead, the time step is adjusted to predict accurate results for the lowest modes, which carry the global response of the structure. The devising of such algorithms will be the subject of Chapter 2.

System reduction schemes such as (1-101), (1-103) and (1-112) require a limited number of low-frequency eigen-pairs to be known. Since the high-frequency components have been filtered out, the numerical integration of the modal coordinate differential equations (1-100) and (1-110) may be performed by standard ordinary differential equation solvers or by modification of the methods devised in Chapter 2. Hence, the primary obstacle in using these methods is the determination of the low-frequency eigen-pairs. This problem will be the subject of Chapters 3-7 of the book. Moreover, only solutions to the GEVP (1-9) will be considered, i.e. the involved system matrices are assumed to be symmetric and non-negative definite.

1.4 Exercises

1.1 Given the following mass and stiffness matrices

\[
\mathbf{M} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & \tfrac{1}{2} \end{bmatrix},\quad
\mathbf{K} = \begin{bmatrix} 2 & -1 & 0\\ -1 & 2 & 0\\ 0 & 0 & 3 \end{bmatrix}
\]

(a.) Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
(b.) Determine two vectors that are M-orthonormal without being eigenmodes.

1.2 The eigensolutions with eigenmodes normalized to unit modal mass of a 2-dimensional generalized eigenvalue problem are given as

\[
\mathbf{\Lambda} = \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2 \end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & 4 \end{bmatrix},\quad
\mathbf{\Phi} = \big[\mathbf{\Phi}^{(1)}\ \mathbf{\Phi}^{(2)}\big] = \begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2} \end{bmatrix}
\]

(a.) Calculate M and K.

1.3 Write a MATLAB program, which solves the undamped generalized eigenvalue problem
for the vibrating string problem considered in Example 1.2 for both the finite difference and
the finite element discretized equations of motion. The circular eigenfrequencies should
be presented in ascending order of magnitude, and the related eigenmodes should be nor-
malized to unit modal mass.

(a.) Use the program to evaluate the 4 lowest circular eigenfrequencies of the string as a
function of the number of elements n for both discretization methods, and compare
the numerical results with the analytical solution (1-71).
(b.) Based on the obtained results suggest a mass matrix, which will do better.

1.4 Write a MATLAB program, which solves the undamped and damped generalized eigen-
value problems considered in Example 1.6.
CHAPTER 2
NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

This chapter deals with the numerical time integration of the initial value problem (1-1) in the finite interval [t0, t0 + T]. The idea of the numerical integration scheme is to determine the solution of (1-1) approximately at the discrete instants of time tj = t0 + j∆t, j = 1, 2, ..., n, where ∆t = T/n. To facilitate notation the following symbols are introduced

xj = x(tj) , ẋj = ẋ(tj) , ẍj = ẍ(tj) , fj = f(tj) , j = 0, 1, ..., n (2–1)

Single step algorithms in numerical time integration in structural dynamics determine the displacement vector xj+1, the velocity vector ẋj+1 and the acceleration vector ẍj+1 at the new time tj+1, on condition of knowledge of xj, ẋj, ẍj at the previous instant of time, as well as the load vectors fj and fj+1 at the ends of the considered sub-interval [tj, tj+1]. In multi step algorithms the solution at the time tj+1 also depends on one or more solutions prior to the time tj. Additionally, distinction will be made between single value algorithms, which solve solely for the displacement vector xj, and multi value algorithms, where the solution is obtained for a state vector encompassing the displacement vector xj, the velocity vector ẋj, and in some cases even the acceleration vector ẍj. Classical algorithms in numerical analysis such as the vector generalization of the Runge-Kutta methods¹ may be used for the solution of (1-1). However, given that large scale structural models contain very high frequency components, these schemes become numerically unstable unless extremely small time steps are used. For this reason the devising of useful algorithms in structural dynamics is governed by different objectives than in numerical analysis, as will be further explained below.

Newmark algorithms2 treated in Section 2.1 are probably the most widely used algorithms in
structural dynamics for solving (1-1). The derived single step multi value formulation of the
methods serves as a generic example for specification of accuracy, stability, and numerical
damping.

¹ D.G. Zill and M.R. Cullen: Differential Equations with Boundary-Value Problems, 5th Ed. Brooks/Cole, 2001.
² N.M. Newmark: A Method of Computation for Structural Dynamics. J. Eng. Mech., ASCE, 85(EM3), 1959, 67-94.


High frequencies and mode shapes of the spatially discretized equations (1-1) do not represent the behavior of the underlying physical problem very well. The corresponding modal components merely behave as numerical noise on top of the response carried by the lower frequency modes. For this reason it is desirable to filter these components out of the response. In numerical time integrators in structural dynamics this is achieved by the introduction of numerical (artificial) damping, which affects merely the high frequency modes. However, it turns out that numerical damping cannot be introduced in the Newmark algorithms without compromising the accuracy of the response of the lower modes. Several ways to remedy this problem have been suggested. Here, we shall consider the so-called generalized alpha algorithm suggested by Chung and Hulbert,³ which seems to be the most favorable single step single value algorithm for this purpose. The outline of the text relies primarily on the monographs of Hughes⁴,⁵ and Krenk.⁶

³ J. Chung and G.M. Hulbert: A Time Integration Algorithm for Structural Dynamics with Improved Numerical Dissipation: The Generalized α Method. Journal of Applied Mechanics, 60, 1993, 371-375.
⁴ T.J.R. Hughes: The Finite Element Method. Linear Static and Dynamic Finite Element Analysis. Prentice-Hall, Inc., 1987.
⁵ T.J.R. Hughes: Analysis of Transient Algorithms with Particular Reference to Stability Behavior. Chapter 2 in Computational Methods for Transient Analysis. Vol. 1 in Computational Methods in Mechanics, Eds. T. Belytschko and T.J.R. Hughes, North-Holland, 1983.
⁶ S. Krenk: Dynamic Analysis of Structures. Numerical Time Integration. Lecture Notes, Department of Mechanical Engineering, Technical University of Denmark, 2005.

2.1 Newmark Algorithm

The Newmark family consists of the following equations

\[
\mathbf{M}\ddot{\mathbf{x}}_{j+1} + \mathbf{C}\dot{\mathbf{x}}_{j+1} + \mathbf{K}\mathbf{x}_{j+1} = \mathbf{f}_{j+1}\;,\quad j = 0, 1, \ldots, n-1
\tag{2–2}
\]

\[
\mathbf{x}_{j+1} = \mathbf{x}_j + \dot{\mathbf{x}}_j\Delta t + \left( \Big( \tfrac{1}{2} - \beta \Big)\ddot{\mathbf{x}}_j + \beta\ddot{\mathbf{x}}_{j+1} \right)\Delta t^2
\tag{2–3}
\]

\[
\dot{\mathbf{x}}_{j+1} = \dot{\mathbf{x}}_j + \Big( \big( 1 - \gamma \big)\ddot{\mathbf{x}}_j + \gamma\ddot{\mathbf{x}}_{j+1} \Big)\Delta t
\tag{2–4}
\]

(2-2) indicates the differential equation at the time tj+1, which is required to be fulfilled for the new solution for ẍj+1, ẋj+1, xj+1. (2-3) and (2-4) are approximate Taylor expansions, which have been derived in Box 2.2. The parameters β and γ determine the numerical stability and accuracy of the algorithms. The Newmark family contains several well-known numerical algorithms as special cases. Examples are the central difference algorithm treated in Example 2.2, which corresponds to (β, γ) = (0, ½), and the Crank-Nicolson algorithm treated in Example 2.3, which corresponds to (β, γ) = (¼, ½).

There are several implementations of the methods. The most useful is the following single step single value implementation. At first, define the following predictors

\[
\bar{\mathbf{x}}_{j+1} = \mathbf{x}_j + \dot{\mathbf{x}}_j\Delta t + \Big( \tfrac{1}{2} - \beta \Big)\Delta t^2\,\ddot{\mathbf{x}}_j
\tag{2–5}
\]

\[
\dot{\bar{\mathbf{x}}}_{j+1} = \dot{\mathbf{x}}_j + \big( 1 - \gamma \big)\Delta t\,\ddot{\mathbf{x}}_j
\tag{2–6}
\]

(2-5) and (2-6) specify predictions (preliminary solutions) for xj+1 and ẋj+1 based on the information available at the time tj. The idea of the algorithm is to insert (2-3) and (2-4) into (2-2). Given that the solution is required to fulfill the equations of motion at the time tj+1, and using (2-5) and (2-6), the following equation is obtained for the new acceleration vector in terms of known solution quantities from the previous time and the load vector fj+1

\[
\Big( \mathbf{M} + \gamma\Delta t\,\mathbf{C} + \beta\Delta t^2\,\mathbf{K} \Big)\,\ddot{\mathbf{x}}_{j+1} = \mathbf{f}_{j+1} - \mathbf{C}\dot{\bar{\mathbf{x}}}_{j+1} - \mathbf{K}\bar{\mathbf{x}}_{j+1}
\tag{2–7}
\]

Next, based on the solution for ẍj+1 obtained from (2-7), corrected (new) solutions for ẋj+1 and xj+1 may be obtained from (2-3) and (2-4). These may be written as

\[
\mathbf{x}_{j+1} = \bar{\mathbf{x}}_{j+1} + \beta\Delta t^2\,\ddot{\mathbf{x}}_{j+1}
\tag{2–8}
\]

\[
\dot{\mathbf{x}}_{j+1} = \dot{\bar{\mathbf{x}}}_{j+1} + \gamma\Delta t\,\ddot{\mathbf{x}}_{j+1}
\tag{2–9}
\]
To start the algorithm the acceleration ẍ0 at the time t0 is needed. This is obtained from the
equation of motion

Mẍ0 = f0 − Cẋ0 − Kx0 (2–10)


The algorithm has been summarized in Box 2.1. In stability and accuracy analysis a single step multi value formulation for the state vector made up of the displacement and velocity vectors is preferred. In order to derive this, eqs. (2-3) and (2-4) are multiplied with M. Next, the accelerations are eliminated by means of the differential equations at the times tj and tj+1, leading to

\[
\left.
\begin{aligned}
\mathbf{M}\mathbf{x}_{j+1} &= \mathbf{M}\mathbf{x}_j + \mathbf{M}\Delta t\,\dot{\mathbf{x}}_j
+ \Big( \Big( \tfrac{1}{2} - \beta \Big)\big( \mathbf{f}_j - \mathbf{C}\dot{\mathbf{x}}_j - \mathbf{K}\mathbf{x}_j \big)
+ \beta\big( \mathbf{f}_{j+1} - \mathbf{C}\dot{\mathbf{x}}_{j+1} - \mathbf{K}\mathbf{x}_{j+1} \big) \Big)\Delta t^2\\
\mathbf{M}\dot{\mathbf{x}}_{j+1} &= \mathbf{M}\dot{\mathbf{x}}_j
+ \Big( \big( 1 - \gamma \big)\big( \mathbf{f}_j - \mathbf{C}\dot{\mathbf{x}}_j - \mathbf{K}\mathbf{x}_j \big)
+ \gamma\big( \mathbf{f}_{j+1} - \mathbf{C}\dot{\mathbf{x}}_{j+1} - \mathbf{K}\mathbf{x}_{j+1} \big) \Big)\Delta t
\end{aligned}
\right\}
\;\Rightarrow
\]

\[
\begin{bmatrix}
\mathbf{M} + \beta\Delta t^2\mathbf{K} & \beta\Delta t^2\mathbf{C}\\
\gamma\Delta t\,\mathbf{K} & \mathbf{M} + \gamma\Delta t\,\mathbf{C}
\end{bmatrix}
\begin{bmatrix} \mathbf{x}_{j+1}\\ \dot{\mathbf{x}}_{j+1} \end{bmatrix}
=
\begin{bmatrix}
\mathbf{M} - \big( \tfrac{1}{2} - \beta \big)\Delta t^2\mathbf{K} & \Delta t\,\mathbf{M} - \big( \tfrac{1}{2} - \beta \big)\Delta t^2\mathbf{C}\\
-(1 - \gamma)\Delta t\,\mathbf{K} & \mathbf{M} - (1 - \gamma)\Delta t\,\mathbf{C}
\end{bmatrix}
\begin{bmatrix} \mathbf{x}_j\\ \dot{\mathbf{x}}_j \end{bmatrix}
+
\begin{bmatrix}
\big( \tfrac{1}{2} - \beta \big)\Delta t^2 & \beta\Delta t^2\\
(1 - \gamma)\Delta t & \gamma\Delta t
\end{bmatrix}
\begin{bmatrix} \mathbf{f}_j\\ \mathbf{f}_{j+1} \end{bmatrix}
\;\Rightarrow\;
\mathbf{z}_{j+1} = \bar{\mathbf{D}}\mathbf{z}_j + \mathbf{E}_j
\tag{2–11}
\]

where

\[
\left.
\begin{aligned}
\mathbf{z}_j &= \begin{bmatrix} \mathbf{x}_j\\ \dot{\mathbf{x}}_j \end{bmatrix}\\
\bar{\mathbf{D}} &= \begin{bmatrix}
\mathbf{M} + \beta\Delta t^2\mathbf{K} & \beta\Delta t^2\mathbf{C}\\
\gamma\Delta t\,\mathbf{K} & \mathbf{M} + \gamma\Delta t\,\mathbf{C}
\end{bmatrix}^{-1}
\begin{bmatrix}
\mathbf{M} - \big( \tfrac{1}{2} - \beta \big)\Delta t^2\mathbf{K} & \Delta t\,\mathbf{M} - \big( \tfrac{1}{2} - \beta \big)\Delta t^2\mathbf{C}\\
-(1 - \gamma)\Delta t\,\mathbf{K} & \mathbf{M} - (1 - \gamma)\Delta t\,\mathbf{C}
\end{bmatrix}\\
\mathbf{E}_j &= \begin{bmatrix}
\mathbf{M} + \beta\Delta t^2\mathbf{K} & \beta\Delta t^2\mathbf{C}\\
\gamma\Delta t\,\mathbf{K} & \mathbf{M} + \gamma\Delta t\,\mathbf{C}
\end{bmatrix}^{-1}
\begin{bmatrix}
\big( \tfrac{1}{2} - \beta \big)\Delta t^2 & \beta\Delta t^2\\
(1 - \gamma)\Delta t & \gamma\Delta t
\end{bmatrix}
\begin{bmatrix} \mathbf{f}_j\\ \mathbf{f}_{j+1} \end{bmatrix}
\end{aligned}
\right\}
\tag{2–12}
\]

D̄ denotes the so-called amplification matrix. The bar indicates that this is an approximation to the exact amplification matrix, which has been derived in Example 2.1.

Box 2.1: Newmark algorithm

Given the initial displacement vector x0 and the initial velocity vector ẋ0 at the time t0. Calculate the initial acceleration vector ẍ0 from

Mẍ0 = f0 − Cẋ0 − Kx0

Repeat the following items for j = 0, 1, ..., n − 1

1. Calculate predictors for the new displacement and velocity vectors

x̄j+1 = xj + ẋj∆t + (½ − β)∆t² ẍj
x̄˙j+1 = ẋj + (1 − γ)∆t ẍj

2. Calculate the new acceleration vector from

(M + γ∆tC + β∆t²K) ẍj+1 = fj+1 − Cx̄˙j+1 − Kx̄j+1

3. Calculate the new displacement and velocity vectors

xj+1 = x̄j+1 + β∆t² ẍj+1
ẋj+1 = x̄˙j+1 + γ∆t ẍj+1
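A direct MATLAB transcription of Box 2.1 might look as follows (e.g. saved as newmark.m); the function returns the solution at the end of the interval and assumes f is a function handle returning the load vector at a given time:

```matlab
% Minimal sketch of the Newmark algorithm in Box 2.1. Returns the state
% after nsteps steps of size dt. f is a function handle, f(t) -> load vector.
function [x, xd, xdd] = newmark(M, C, K, f, x0, xd0, t0, dt, nsteps, beta, gamma)
x = x0;  xd = xd0;
xdd = M \ (f(t0) - C*xd - K*x);                   % initial acceleration (2-10)
S = M + gamma*dt*C + beta*dt^2*K;                 % constant coefficient matrix
for j = 0:nsteps-1
    xb  = x  + dt*xd + (1/2 - beta)*dt^2*xdd;     % predictor (2-5)
    xdb = xd + (1 - gamma)*dt*xdd;                % predictor (2-6)
    xdd = S \ (f(t0 + (j+1)*dt) - C*xdb - K*xb);  % new acceleration (2-7)
    x   = xb  + beta*dt^2*xdd;                    % corrector (2-8)
    xd  = xdb + gamma*dt*xdd;                     % corrector (2-9)
end
end
```

In practice the constant matrix S would be factorized once (e.g. by chol or lu) instead of re-solved in each step; the sketch keeps the step-by-step structure of Box 2.1.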

Box 2.2: Derivation of (2-3) and (2-4)

Based on conventional integration theory the following identities may be formulated

\[
\begin{aligned}
\mathbf{x}(t_{j+1}) &= \mathbf{x}(t_j) + \int_{t_j}^{t_{j+1}} \dot{\mathbf{x}}(\tau)\,d\tau\\
\dot{\mathbf{x}}(t_{j+1}) &= \dot{\mathbf{x}}(t_j) + \int_{t_j}^{t_{j+1}} \ddot{\mathbf{x}}(\tau)\,d\tau
\end{aligned}
\tag{2–13}
\]

Integration by parts of the first relation provides

\[
\mathbf{x}(t_{j+1}) = \mathbf{x}(t_j) - \Big[ \big( t_{j+1} - \tau \big)\dot{\mathbf{x}}(\tau) \Big]_{t_j}^{t_{j+1}}
+ \int_{t_j}^{t_{j+1}} \big( t_{j+1} - \tau \big)\ddot{\mathbf{x}}(\tau)\,d\tau
\;\Rightarrow\;
\mathbf{x}_{j+1} = \mathbf{x}_j + \Delta t\,\dot{\mathbf{x}}_j + \int_{t_j}^{t_{j+1}} \big( t_{j+1} - \tau \big)\ddot{\mathbf{x}}(\tau)\,d\tau
\tag{2–14}
\]

The indicated derivation is due to Krenk.⁶ (2-14) may be interpreted as a truncated Taylor expansion, where the integral represents the remainder. Correspondingly, the 2nd equation in (2-13) is written as

\[
\dot{\mathbf{x}}_{j+1} = \dot{\mathbf{x}}_j + \int_{t_j}^{t_{j+1}} \ddot{\mathbf{x}}(\tau)\,d\tau
\tag{2–15}
\]

Next, the integrals in (2-14) and (2-15) are represented by the following linear combinations of the values of the acceleration vector at the ends of the integration interval

\[
\begin{aligned}
\int_{t_j}^{t_{j+1}} \big( t_{j+1} - \tau \big)\ddot{\mathbf{x}}(\tau)\,d\tau &\simeq \Big( \tfrac{1}{2} - \beta \Big)\Delta t^2\,\ddot{\mathbf{x}}_j + \beta\Delta t^2\,\ddot{\mathbf{x}}_{j+1}\\
\int_{t_j}^{t_{j+1}} \ddot{\mathbf{x}}(\tau)\,d\tau &\simeq \big( 1 - \gamma \big)\Delta t\,\ddot{\mathbf{x}}_j + \gamma\Delta t\,\ddot{\mathbf{x}}_{j+1}
\end{aligned}
\tag{2–16}
\]

It is seen that the result in (2-16) becomes correct in case of constant acceleration, where ẍ(τ) ≡ ẍj = ẍj+1. In any case the values of β and γ reflect the actual variation of the acceleration during the interval. If ẍ(τ) is assumed to be constant and equal to the mean of the end-point values, one obtains (β, γ) = (¼, ½), whereas a linear variation between the end-point values provides (β, γ) = (⅙, ½).

The modal expansion (1-97) defines a one-to-one transformation from the physical to the modal coordinates. Hence, the time integration may equally well be performed on the differential equations for the modal coordinates. It follows that the synthesized motion (1-97) becomes numerically unstable, if the integration of just one of the modal coordinates renders into instability. Similarly, the accuracy of the synthesized motion is determined by the accuracy of those modal coordinates which are retained in the truncated modal expansion (1-101). On condition of the modal decoupling condition (1-94), the time integration of the modal coordinates is reduced to the integration of n decoupled single-degree-of-freedom systems. Since the stability and accuracy analysis of a SDOF system can be performed analytically, the important role of the modal decomposition assumption in the stability and accuracy analysis of numerical time integrators becomes clear. In this respect let q(t) denote an arbitrary one of the n modal coordinates, and ζ, ω0 and F(t) the corresponding modal damping ratio, undamped circular eigenfrequency and modal load. On condition that the eigenmodes have been normalized to unit modal mass, the differential equation of the said modal coordinate reads

\[
\ddot{q}(t) + 2\zeta\omega_0\dot{q}(t) + \omega_0^2 q(t) = F(t)
\tag{2–17}
\]

The corresponding Newmark integration of (2-17) is given by (2-11), using M = [1], C = [2ζω0], K = [ω0²] and f(t) = [F(t)] in (2-12), resulting in the system matrices

  ⎫
q ⎪

zj = j ⎪


q̇j ⎪



 −1   ⎪

  1  2 ⎪

1 + β∆t2 ω02 2ζβω0∆t2 1 − 12 − β ω02 ∆t2 ∆t − 2 − β 2ζω0∆t ⎪ ⎪

D̄ = ⎪

γω02 ∆t 1 + 2ζγω0∆t −(1 − γ)ω02 ∆t 1 − (1 − γ)2ζω0∆t ⎪

  ⎪



D11 D12
=
D21 D22 ⎪



 −1    ⎪

 2 ⎪

1 + β∆t2 ω02 2ζβω0∆t2 1
− β ∆t β∆t2
F ⎪

Ej = 2 j ⎪



γω02 ∆t 1 + 2ζγω0∆t (1 − γ)∆t γ∆t Fj+1 ⎪

   ⎪



E E12 Fj ⎪

= 11 ⎪

E21 E22 Fj+1
(2–18)
where


1 + 2γζκ + (β − 12 )κ2 + (2β − γ)ζκ3 ⎪
D11 = ⎪

1 + 2γζκ + βκ2 ⎪





1 + (2γ − 1)ζκ + 2(2β − γ)ζ 2 κ2 ⎪

D12 = ∆t ⎪

1 + 2γζκ + βκ2
(2–19)
1 + 12 (2β − γ)κ2 κ2 ⎪

D21 =− ⎪

1 + 2γζκ + βκ2 ∆t ⎪





1 + 2(γ − 1)ζκ + (β − γ)κ2 − (2β − γ)ζκ3 ⎪

D22 = ⎭
1 + 2γζκ + βκ2
2.1 Newmark Algorithm 37


β − 12 + (2β − γ)ζκ 2 ⎪
E11 =− ∆t ⎪


1 + 2γζκ + βκ2 ⎪



β ⎪

E12 = ∆t2 ⎪

1 + 2γζκ + βκ 2
(2–20)
1 − γ + 12 (2β − γ)κ2 ⎪



E21 = ∆t ⎪

1 + 2γζκ + βκ 2




γ ⎪

E22 = 2
∆t
1 + 2γζκ + βκ

κ = ω0 ∆t (2–21)

Example 2.1: Exact single step multi value method

Assume that the initial time in the analytical solution (1-29) is chosen as the time t j = j∆t. Then, the initial value
is changed into z(t j ) = zj , and the solution is modified to

&  '
t
A(t−tj ) −A(τ −tj )
z(t) = e zj + e F(τ )dτ , t > tj (2–22)
tj

Next, (2-22) is considered at the following time t j+1 = tj + ∆t, which leads to following integration algorithm

&  '
tj+1
A∆t −A(τ −tj )
zj+1 = e zj + e F(τ )dτ = Dzj + Ej (2–23)
tj

where

 
x
zj = j (2–24)
ẋj

D = eA∆t (2–25)
 tj+1 


Ej = eA∆t e−A(τ −tj ) F(τ )dτ eA∆t 1 − α e−A·0 Fj + αe−A∆t Fj+1 ∆t =
tj
 

1 − α e−A∆t Fj + αFj+1 ∆t (2–26)

In (2-26) the integral has been evaluated by a generalized trapezoidal rule defined by the parameter α ∈ [0, 1] in
terms of a weighted average of the values e −A·0 Fj and e−A∆t Fj+1 at the ends of the integration interval [t j , tn+1 ].
D denotes the exact amplification matrix. Correspondingly, the evolutionary equation (2-11) determines the exact
solution for the state vector to the accuracy of the approximation (2-26) for the load vector.

The matrix exponential functions entering (2-25) and (2-26) are given by, cf. (1-55)


eA∆t = ΨeΛA ∆t Ψ−1
(2–27)
e−A∆t = ΨeΛA ∆t Ψ−1
38 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

The modal matrix Ψ contains the eigenmodes of the matrix D, which are identical to the eigenmodes of the matrix
A. The diagonal matrix e ΛA ∆t stores the eigenvalues λ D,j = eλA,j ∆t of D in the main diagonal. λ A,j denotes the
corresponding eigenvalue of A, which may be written in the form 7

λA,j = ωj − ζj + i 1 − ζj2 (2–28)

ωj and ζj defines the equivalent undamped circular eigenfrequency and damping ratio in case of damped of damped
eigenvibration in the jth mode. These definitions correspond to the conventional definition of these quantities in
case of the modal decoupling condition (1-94). 7 From (2-28) follows, that the modal damping ratio is related to
the magnitude of the eigenvalues of D as

|λD,j | = e−ζj ωj ∆t ⇒
ln |λD,j |
ζj = − (2–29)
ωj ∆t

The damped circular eigenfrequency ω d,j is related to the eigenvalues of the amplification matrix as follows

( 1 1 
 θ
D,j
ωd,j = ωj 1 − ζj2 = Im λA,j = Im ln λD,j = Im ln |λD,j |eiθD,j = (2–30)
∆t ∆t ∆t
where θD,j denotes the argument of λ D,j .

Example 2.2: Displacement difference equation form of Newmark algorithm

In this example an implementation of the Newmark algorithms will be derived, where the solution for the unknown
displacement vector x j+1 at the time tj+1 is determined as a function of the previous known displacement vectors
xj and xj−1 , as well as the load vectors f j+1 , fj and fj−1 . At first, (2-2) is formulated at the times t j+1 , tj and
tj−1 as follows

Mẍj+1 + Cẋj+1 + Kxj+1 = fj+1 ⎪

Mẍj + Cẋj + Kxj = fj (2–31)


Mẍj−1 + Cẋj−1 + Kxj−1 = fj−1
 
The first equation in (2-29) is multiplied
1 withβ∆t 2 , the 2nd equation is multiplied with 12 − 2β + γ ∆t2 , and the
third equation is multiplied with 2 + β − γ ∆t2 . Finally, the resulting equations are added, leading to

)     *
1 1
∆t2 M β ẍj+1 − 2β ẍj + β ẍj−1 + + γ ẍj + − γ ẍj−1 +
2 2
)     *
1 1
∆t C β ẋj+1 − 2β ẋj + β ẋj−1 +
2
+ γ ẋj + − γ ẋj−1 +
2 2
)     *
1 1
∆t2 K βxj+1 − 2βxj + βxj−1 + + γ xj + − γ xj−1 =
2 2
   
1 1
β∆t2 fj+1 + − 2β + γ ∆t2 fj + + β − γ ∆t2 fj−1 (2–32)
2 2

Next, (2-4) and (2-5) are formulated at the time t j+1 and tj as follows
7
S.R.K. Nielsen: Structural Dynamics, Vol. 1. Linear Structural Dynamics, 4th Ed. Aalborg tekniske Univer-
sitetsforlag, 2004.
2.1 Newmark Algorithm 39

  ⎫
1 ⎪

xj+1 = xj + ∆t ẋj + − β ∆t2 ẍj + β∆t2 ẍj+1 ⎪

2
  (2–33)
1 ⎪

xj = xj−1 + ∆t ẋj−1 + − β ∆t2 ẍj−1 + β∆t2 ẍj ⎪

2
  
ẋj+1 = ẋj + 1 − γ ∆t ẍj + γ∆t ẍj+1
  (2–34)
ẋj = ẋj−1 + 1 − γ ∆t ẍj−1 + γ∆t ẍj
Withdrawal of the last equations in (2-33) and (2-34) from the the first equations provides the identities


  1  
β∆t2 ẍj+1 − 2ẍj + ẍj−1 = xj+1 − 2xj + xj−1 − ∆t ẋj − ẋj−1 − ∆t2 ẍj − ẍj−1 (2–35)

  2
γ∆t ẍj+1 − 2ẍj + ẍj−1 = ẋj+1 − 2ẋj + ẋj−1 − ∆t ẍj − ẍj−1 (2–36)

Next, ( 12 + γ)∆t2 ẍj + 12 − γ)∆t2 ẍj−1 is added on both sides of (2-35), and (2-3) is used on the resulting right
hand side, which provides


1  
1

∆t β ẍj+1 − 2β ẍj + β ẍj−1 +
2
+ γ ∆t ẍj +
2
− γ ∆t2 ẍj−1 =
2 2
   
  1 2  1 1
xj+1 − 2xj + xj−1 − ∆t ẋj − ẋj−1 − ∆t ẍj − ẍj−1 + + γ ∆t ẍj +
2
− γ ∆t2 ẍj−1 =
2 2 2
xj+1 − 2xj + xj−1 (2–37)
(2-36) is solved for the velocity terms on the right hand side, and the resulting equation is multiplied with β∆t 2 .
Next, ( 12 + γ)∆t2 ẋj + 12 − γ)∆t2 ẋj−1 is added on both sides of the equation, resulting in


1  
1

∆t β ẋj+1 − 2β ẋj + β ẋj−1 +
2
+ γ ∆t ẋj +
2
− γ ∆t2 ẋj−1 =
2 2

   
  1 1
βγ∆t ẍj+1 − 2ẍj + ẍj−1 + β∆t ẍj − ẍj−1 +
3 3
+ γ ∆t ẋj +
2
− γ ∆t2 ẋj−1 =
2 2


1
 
γ∆t xj+1 − 2xj + xj−1 − γ∆t2 ẋj − ẋj−1 + β − γ ∆t3 ẍj − ẍj−1 +
    2
1 1

+ γ ∆t ẋj +
2
− γ ∆t ẋj−1 = γ∆t xj+1 − 2xj + xj−1 + ∆t xj − xj−1
2
(2–38)
2 2
The 3rd line in (2-38) follows from the 2nd line by eliminating the term βγ∆t 3 (ẍj+1 − 2ẍj + ẍj−1 ) by means of
(2-35). The final result is based on the following identity, which is obtained by a multiplication of the last equation
in (2-33) with ∆t, and the last equation in (2-34) with 12 ∆t2 , following by a withdrawal of the resulting equations

1
    1  
β − γ ∆t3 ẍj − ẍj−1 = ∆t xj − xj−1 − ∆t2 ẋj + ẋj−1 (2–39)
2 2
(2-37) and (2-38) are inserted into (2-32). After grouping terms with common multipliers x j+1 , xj , xj−1 the
following final result is obtained

  )   *
  1
M + γ∆t C + β∆t2 K xj+1 − 2M − 1 − 2γ ∆t C − − 2β + γ ∆t2 K xj +
2
)   *
  1
M − 1 − γ ∆t C + + β − γ ∆t2 K xj−1 =
2
   
1 1
β∆t2 fj+1 + − 2β + γ ∆t2 fj + + β − γ ∆t2 fj−1 (2–40)
2 2
40 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

(2-40) represents the so-called displacement difference equation form of the Newmark algorithm, which constitutes
a multi step single value formulation of the method. At the calculation of x 1 the previous solution x 0 is given as
one of the initial value vectors, whereas x −1 is unknown. Hence, the algorithm has a starting problem. Instead, x 1
is calculated by the standard implementation given in Box 2.1, before the algorithm (2-40) is used for j ≥ 1.

Next, consider the special case of β = 0 and γ = 12 . Then, (2-40) reduces to

 1     1 
M + ∆t C xj+1 − 2M − ∆t2 K xj + M − ∆t C xj−1 = ∆t2 fj (2–41)
2 2
Consider the central difference 1 approximations to the acceleration and the velocity vectors
xj+1 − 2xj + xj−1 xj+1 − xj−1
ẍj , ẋj (2–42)
∆t2 2∆t
(2-41) is obtained, if the finite difference approximations (2-42) are inserted into the middlemost equation of (2-
31). Hence, the central difference solution to (1-1) constitutes a special case of the Newmark family corresponding
to the parameters (β, γ) = (0, 12 ).
The central difference algorithm is only conditional stable. However, if M and C are diagonal it provides an
explicit solution for x j+1 , which makes it highly economical. In cases where the time step is controlled by
accuracy rather than stability, which is often the case in many wave propagation problems, the central difference
algorithm is widely used.

Example 2.3: Crank-Nicolson algorithm


For (β, γ) = ( 14 , 12 ), eqs. (2-11), (2-12) may be written

  
M + 14 ∆t2 K 4 ∆t C
1 2
xj+1
=
2 ∆t K M + 12 ∆t C ẋj+1
1
     
M − 14 ∆t2 K ∆t M − 14 ∆t2 C xj 1
∆t2 1
4 ∆t
2
fj
+ 41 (2–43)
− 21 ∆t K M − 2 ∆t C
1
ẋj 2 ∆t
1
2 ∆t fj+1

The 2nd equation in (2-43) is multiplied with 12 ∆t, and is withdraw from the 1st equation, resulting in

        
M − 12 ∆t M xj+1 M 2 ∆t M
1
xj 0 0 fj
= + 1 (2–44)
1
2 ∆t K M + 1
2 ∆t C ẋj+1 − 2 ∆t K
1
M − 12 ∆t C ẋj 2 ∆t 1
2 ∆t fj+1

The Crank-Nicolson algorithm is single step multi value method, where the amplification matrix in (2-11) is given
as

D̄ = D−1
1 D2 (2–45)

where, cf. (1-27)


    
1 I − 12 ∆t I M−1 0 M − 21 ∆t M
D1 = I − ∆tA = −1
= (2–46)
2 2 ∆t M
1
K I + 12 ∆t M−1 C 0 M−1 2 ∆t K
1
M + 12 ∆t C
    
1 I 2 ∆t I
1
M−1 0 M 2 ∆t M
1
D2 = I + ∆tA = =
2 − 2 ∆t M−1 K
1
I − 12 ∆t M−1 C 0 M−1 − 2 ∆t K
1
M − 12 ∆t C
(2–47)
2.1 Newmark Algorithm 41

A systematic derivation of the Crank-Nicolson algorithm along with other high-accuracy methods will be given in
Example 2.x.

Insertion of (2-46) and (2-47) into (2-45) provides

 −1  −1   
M − 21 ∆t M M−1 0 M−1 0 M 2 ∆t M
1
D̄ = 1
2 ∆t K M + 12 ∆t C 0 M−1 0 M−1 − 12 ∆t K M − 12 ∆t C
 −1  
M − 21 ∆t M M 1
2 ∆t M
= 1 (2–48)
2 ∆t K M + 12 ∆t C − 21 ∆t K M − 12 ∆t C

The amplification (2-48) is identical to the one obtained from (2-44). From this is concluded that the Crank-
Nicolson algorithm constitutes a special case of the Newmark family corresponding to the parameters (β, γ) =
( 14 , 12 ). As mentioned in Box 2.2 this parameter combination is obtained in case of a constant variation of the
acceleration during the interval [t j , tj+1 ] given by ẍ(t) ≡ 12 (ẍj + ẍj+1 ).

2.1.1 Numerical accuracy


The approximate Newmark algorithm develops after (2-11), whereas the exact development
is given by (2-23). Assume that the load vectors E j in (2-11) and (2-23) are identical. On
condition of an identical state vector z j at the time tj , the deviation between the Newmark
solution z̄j+1 and the exact solution zj+1 at the succeeding time tj+1 is given by

zj+1 − z̄j+1 = D − D̄ zj = ε (2–49)

The error vector ε depends on the difference D − D̄, which in term depends on the magnitude
of the time step ∆t. Assume that the error vector has the form

+  +
τl = +ε ∆tk+1 + (2–50)

,  ,
τ = ,O ∆tk+1 , (2–51)

D = ΨΛD Ψ−1 (2–52)

D̄ = Ψ̄Λ̄D Ψ̄−1 (2–53)

ΛD = eΛA ∆t (2–54)
42 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

Λ̄D = eΛ̄A ∆t (2–55)

θ̄D
ω̄ = (2–56)
∆t

ln |λ̄D |
ζ̄0 = − (2–57)
ω̄∆t

2.1.2 Numerical stability


Given  xj−1 and xj are known exactly, (2-5) determines xj+1 with an error of the magnitude
 2that
O ∆t . Hence, at each calculation step new errors are introduced in the solution. As is the
case for the initial conditions or any other external disturbance, errors introduced at a certain
instant of time will be dissipated by the numerical scheme. However, if the errors are introduced
at a rate and magnitude overruling the dissipating capability of algorithm, the obtained results
will eventually oscillate with larger and larger amplitudes. We say that the algorithm becomes
numerical unstable. Typically, stability problems may be cured by selecting a smaller time step.
In order to identify the mechanism trigging the numerical instability, the generic form (2-5) is
iterated to obtain the following sequence of results


z1 = Dz0 + E0 ⎪



z2 = Dz1 + E1 = D2 z0 + DE0 + E1 ⎪





z3 = Dz2 + E2 = D3 z0 + D2 E0 + DE1 + E2 ⎬
..
. ⎪



j
% ⎪



zj+1 j+1 j j−1 j+1
= Dzj + Ej = D z0 + D E0 + D E1 + · · · + DEj−1 + Ej = D z0 + D j−m
Em ⎪


m=0
(2–58)

Additionally, the eigenvalue problem related to the transfer matrix D is considered. In the fol-
lowing the kth eigen-pair of the transfer matrix D and the system matrix A given by (1-27) are
(k) (k)
denoted as (λk,D , ΨD ) and (λk,A , ΨA ), respectively. Although in some cases the eigenvectors
of D and A are identical, as for the Crank-Nicolson algorithm described in Example 2.2, the
eigenvectors of the two matrices will in general be different and may even be of different di-
(k) (k)
mension. Nevertheless, the kth eigenvectors Ψ D and ΨA describes the same eigenmode of the
system. Similarly, λk,D is an approximation to the kth eigenvalue λ k,A . The eigenvalues λk,D
are stored in the diagonal matrix ΛD , whereas the eigenvectors Ψ(k) are stored column-wise in
the modal matrix ΨD . Similar to (1-38) the eigenvalue problems may represented in the matrix
form
2.1 Newmark Algorithm 43

DΨD = ΨD ΛD (2–59)

From (2-39) follows, cf. (1-41), (1-52)

D = ΨD ΛD Ψ−1
D ⇒ Dj = ΨD ΛjD Ψ−1
D (2–60)

ΛjD signifies a diagonal matrix with the quantities λ jk,D in the main diagonal. Insertion of (2-40)
into (2-38) provides the relation

j
%
qj+1 = Λj+1
D q0 + Λj−m
D em (2–61)
m=0

where

qj = Ψ−1
D zj , em = Ψ−1
D Em (2–62)

All matrices entering (2-41) are diagonal. Hence, the relation among the kth component of the
2n-dimensional vectors qj+1 , q0 and em become

j
%
qk,j+1 = λj+1
k,D qk,0 + λj−m
k,D ek,m , k = 1, . . . , N (2–63)
m=0

where N is the dimension of the state vector. Next, using the triangle inequality the following
upper bound for the right hand side of (2-43) may be formulated

+ +
+ + ++ j+1 j
% + +
+ + +j+1 + + % j
+ + + +
+qk,j+1+ = +λ qk,0 + j−m
λk,D ek,m + ≤ λk,D + + +
qk,0 + +λk,D +j−m +ek,m + (2–64)
k,D
+ +
m=0 m=0

Let,

+ +
ek,max = max +ek,m + (2–65)
m=1, 2, ...

Then, the upper bound in (2-44) may be further elaborated as

j
+ + + + + + % + +
+qk,j+1+ ≤ +λk,D +j+1 +qk,0 + + ek,max +λk,D +j−m =
m=0
+ +
+ +j+1+ + 1 − +λk,D +j+1
+λk,D + +qk,0 + + ek,max + + (2–66)
1 − +λk,D +
44 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

In the last statement the following well-known sum formula has been used

1 − xj+1
1 + x + x2 + · · · + xj = (2–67)
1−x
+ +
The right hand side of (2-46), and hence +yk,j+1+, remains finite as j → ∞, if

+ +
+λk,D + < 1 , k = 1, . . . , N (2–68)

(2-48) is a sufficient condition for stability of the numerical integration algorithm. From (2-41)
follows that the stability of the algorithm requires that all the diagonal matrices Λ j+1
D and ΛD
j−m

remain finite as as j → ∞. This is the case if

+ +
+λk,D + ≤ 1 , k = 1, . . . , N (2–69)

(2-49) is a necessary condition for


+ stability
+ of the algorithm. The sufficiency of the algorithm,
+ +
if one or more eigenvalues fulfill λk,D = 1 is not guaranteed.

In conclusion it is shown that the stability analysis of the numerical integration algorithm can be
reduced to an eigenvalue analysis of the transfer matrix of the related state vector formulation.

Next, consider the expansion in modal coordinates (1-95). Due to the one-to-one correspon-
dence the solution for the displacement x j = x(j∆t) will be of finite length as j → ∞, if and
only if the modal coordinate vector qj = q(j∆t) also remains of finite length. Assume that
the modal decoupling condition (1-92) is valid. Then, the stability of the numerical integration
algorithm can be investigated by checking the stability of the same numerical integration algo-
rithm applied to the sequence of SDOF differential equations (1-98). In this way the stability
of the central difference method has been investigated in Example 2.5. From this observation is
concluded that the stability is determined by the highest frequency mode in the expansion. As
a consequence truncated modal expansions such as (1-99) will have better numerical stability
properties than the full expansion (1-95). This observation is often formulated by the phrase
that filtration of the high-frequency modes from the response improves the numerical stability.

A numerical integration algorithm, which is stable for arbitrary length of the time step ∆t is
denoted unconditional stable. The Crank-Nicolson algorithm is unconditional stable as shown
in Example 2.4. If stability is only achieved for time steps below a certain critical length the
algorithm is referred to as conditional stable. The central difference algorithm is conditional
stable as shown in Example 2.5 below. Generally, the length of the time step should be deter-
mined by accuracy requirements. Hence, conditional stable algorithms should be avoided in
cases where the time step is determined by stability requirements.
2.1 Newmark Algorithm 45

2.1.3 Period Distortion


Consider damped eigenvibrations of the system (1-1), i.e. the dynamic load vector f(t) ≡ 0. In
this case the iterated solution in modal coordinates (2-43) reduces to

qk,j+1 = λj+1
k,D qk,0 (2–70)

Let the eigenvalues be written in the polar notation

+ +
λk,D = +λk,D +ei θk,D ⇒
+ +j + +j  
λjk,D = +λk,D + ei jθk,D = +λk,D + cos jθk,D + i sin jθk,D (2–71)

where θk,D signifies the argument of the kth eigenvalue. The last multiplier on the right-hand
+side specifies
+ the harmonic varying part of the kth damped eigenvibration, whereas the factor
+λk,D +j determines the diminish of the amplitude due to energy dissipation.

Assume that the harmonic varying part of the numerical solution repeat itself after J time steps.
Then, Jθk,D = 2π. Hence, the damped eigenperiod in the kth mode predicted by the numerical
algorithm becomes equal to

∆t
Tk,d,D = J∆t = 2π (2–72)
θk,D
In order to calculate the corresponding damped eigenperiod of the system (1-26) the eigensolu-
tion (1-34) is written on the form

 
z(t) = ΨA eλk,A t = ΨA e−µk,A t cos νk,A t + i sin νk,A t
(k) (k)
(2–73)

where the representation (2-51) for the eigenvalues λj,A has been used. From (2-61) follows
that the analytical solution for the damped eigenperiod in the kth mode becomes

Tk,d,A = (2–74)
νk,A
Quite often it is seen that the numerical and analytical damped eigenperiod differ. This phe-
nomenon is denoted period distortion. As a measure of the period distortion the following
non-dimensional ratio is introduced

Tk,d,D − Tk,d,A
τk = (2–75)
Tk,d,A
τk indicates the period distortion per period. If τ k > 0 we talk of period elongation, whereas
τk < 0 signifies period shortening. If the algorithm is related with period distortion the nu-
merical and analytical eigensolutions become more and more in disagreement, and may at a
46 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

sufficient large elapsed time even become in counter phase. Similar to numerical stability prob-
lems the period distortion may be cured by reducing the time step. Since the structural response
is determined by the low frequency modes only these modes need to be adjusted for period
distortion. Hence, the requirement to the time step for elimination of period distortion is not as
restrictive as for the fulfillment of the stability requirement for conditional stable numerical al-
gorithms. For undamped or lightly damped systems shown in Example 2.5 the Crank-Nicolson
algorithm has period elongation, whereas the central difference algorithm has period shortening
as demonstrated in Examples 2.7 and 2.8 below.

A numerical example demonstrating the period shortening of the central difference method has
been given in Example 2.9 below.

2.1.4 Numerical Damping


Damped eigenvibrations in the kth mode is given by (2-61). At the time t = j∆t the analytical
solution for the kth modal coordinate may be written

 
qj,k,A = q0,k,A eλk,A j∆t = q0,k,A e−µk,A j∆t cos νk,A j∆t + i sin νk,A j∆t (2–76)

The corresponding numerical solution is given by (2-41)

qj,k,D = λjk,D q0,k,D (2–77)

As a measure of the dissipation of the numerical solution relative to the analytical solution the
following fraction is considered

+ + + +
+qj,k,D + +q0,k,D +
+ + j+ +
+qj,k,A+ = a +q0,k,A + (2–78)

where

+ +
+λk,D +
a= (2–79)
e−µk,A ∆t
Quite often the decrease of the numerical solution for the damped eigenvibrations in the kth
mode relative to the analytical solution is determined by the fraction a. If a < 1 the numerical
solution will decrease faster then the numerical solution. We say that the numerical algorithm
is related with positive numerical damping in the kth mode. On the other hand, if a > 1, the
numerical solution will decrease at a slower rate than the analytical solutions, for which reason
the numerical algorithm is said to be related with negative numerical damping.
2.1 Newmark Algorithm 47

Occationally, numerical damping is investigated by checking whether the algorithm predicts


dissipation in the undamped case, where C = 0. Then, µ k,A = 0, and the criteria a < 1 for
numerical damping reduces to

+ +
+λk,D + < 1 (2–80)

where the eigenvalue λk,D is calculated for a transfer matrix D of an undamped structural sys-
tem.

As we shall see the two criteria are not unique. As shown in Examples 2.8 and 2.9 both the
Crank-Nicolson and the central difference algorithms are free of numerical damping according
to the criteria (2-76). Nevertheless it turns out that a > 1 for the Crank-Nicolson algorithm cor-
responding to negative numerical damping, whereas a < 1 for the central difference algorithm
corresponding to positive numerical damping. As demonstrated in Examples 2.8 and 2.9, the
deviation of the two criteria is of the magnitude O(∆t 3 ). a depends linearly on the damping
ratio for the Crank-Nicolson algorithm, whereas the dependence is cubic for the central differ-
ence algorithm. For this reason the discrepancy between the two criteria for numerical damping
is most pronounced for the Crank-Nicolson algorithm as will be demonstrated in the numerical
examples.

Positive numerical damping is a useful property, since it tends to stabilize the numerical integra-
tion of the high frequency modes. Actually, the numerically damping tends to reduce or filter
these modes out of the response. Of course, the time step should be adjusted so the damping
effect is not visible in the low frequency modes determining the global response.

Example 2.4: Numerical damping of the central difference algorithm and Crank-Nicolson algorithms for a
48 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

1
1.5
0.8

0.6 1

0.4

qk (t)/e−ζk ωk t
0.5
0.2
qk (t)

0 0

−0.2
−0.5
−0.4

−0.6 −1

−0.8
−1.5
−1

−2
0 1 2 3 4 5 0 1 2 3 4 5
t/Tk t/Tk

Fig. 2–1 Crank-Nicolson algorithm. Damped eigen- Fig. 2–2 Negative numerical damping of Crank-
vibrations of SDOF oscillator. ω k = 1, ζk = 0.01, Nicolson algorithm. Normalized damped eigenvi-
(qk,0 , q̇k,0 ) = (1, 0). —-: Analytical solution, - - -: brations of SDOF oscillator. ω k = 1, ζk = 0.1,
∆t ∆t ∆t 1.05
Tk =0.1, -.-.-: Tk = 0.2, .....: Tk = π . (qk,0 , q̇k,0 ) = (1, 0), ∆t
Tk = 0.2.

Fig. 2-3 shows the numerical results for eigenvibrations of a SDOF oscillator with ω k = 1 and ζk = 0.1 obtained
by the Crank-Nicolson algorithm with the time steps ∆t 1.05
Tk = 0.1, 0.2, π . The solutions remain numerical stable
even at very large time steps as predicted by (2-53). Further, all numerical solutions are related with period elon-
gation, which increases proportional with ∆t 2 as indicated by (2-68).

No numerical damping is visible in the results shown on Fig. 2-3. In order to investigate this problem further,
the numerical results are normalized with the amplitude e −ζk ωk t of the analytical solution. The results have been
shown on Fig. 2-4 for the relatively large damping ratio ζ k = 0.1 and the time step ∆t Tk = 0.2. As seen the
numerical solution is dissipated at a slower rate than the analytical solution, verifying the negative numerical
damping effect predicted by (2-84).
2.2 Generalized Alpha Algorithm 49

1 1

0.8 0.8

0.6 0.6

0.4 0.4

qk (t)/e−ζk ωk t
0.2 0.2
qk (t)

0 0

−0.2 −0.2

−0.4 −0.4

−0.6 −0.6

−0.8 −0.8

−1 −1
0 1 2 3 4 5 0 10 20 30 40 50
t/Tk t/Tk

Fig. 2–3 Central difference algorithm. Damped Fig. 2–4 Positive numerical damping of central dif-
eigenvibrations of SDOF oscillator. ω k = 1, ζk = ference algorithm. Normalized damped eigenvibra-
0.01, (qk,0 , q̇k,0 ) = (1, 0). —-: Analytical solution, - tions of SDOF oscillator. ω k = 1, ζk = 0.1,
- -: ∆t ∆t ∆t 1.05
Tk =0.1, -.-.-: Tk = 0.2, .....: Tk = π . (qk,0 , q̇k,0 ) = (1, 0), ∆t
Tk = 0.2.

Fig. 2-5 shows the obtained numerical results. The solutions are numerical stable at the time steps ∆t
Tk =0.1, 0.2,
∆t 1.05
and unstable at the time step Tk = π as predicted by (2-57), demonstrating the conditional stability of the
central difference algorithm. As shown on Fig. 2-3 the Crank-Nicolson algorithm remain stable at the time step
∆t 1.05 2
Tk = π . All numerical solutions are related with period shortening, which increases proportional with ∆t as
predicted by (2-71).

No numerical damping is visible in the results shown on Fig. 2-5. In order to highlight this problem, the numerical
solution is normalized with respect to the envelope function of the analytical solution, e −ζk ωk t . The results have
been shown on Fig. 2-6 for the damping ratio ζ k = 0.1 and time step ∆t Tk = 0.2. As seen the numerical solution
is dissipated at a slower rate than the analytical solution, verifying the positive numerical damping effect predicted
by (2-84). However, the numerical damping effect is much weaker than on Fig. 2-4 due to the dependence on ζ k3
in (2-88), versus the linear dependence on the damping ration in (2-84).

2.2 Generalized Alpha Algorithm

Box 2.3: Derivation of alpha algorithm

Historical review:
Newmark algorithms,8
8
N.M. Newmark: A Method of Computation for Structural Dynamics. J.Eng.Mech., ASCE, 85(EM3), 1959,
67-94.
50 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

Numerical damping is introduced in the Newmark method by chosen the parameter γ > 12 .
However, this will introduce numerical damping also in the lower modes,and hence compromise
the accuracy. Methods to improve upon this situation are the socalled collocation methods, first
introduced by Wilson9, and improved by Hilber and Hughes10
α method of Hilber, Hughes and Taylor11 α-modification of damping, stiffness and external
forces.
α method of Wood, Bossak and Zienkiewicz12 α-modification of inertial forces alone.
Generalized α-method of Chung and Hulbert13 α-modification of inertial terms and of damping,
stiffness and external forces with different α parameters.

Example 2.5: Numerical integration of SDOF oscillator by generalized alpha algorithms

Example 2.6: Unconditional stable high accuracy algorithms


The considered algorithms are based on the following factorization of the exact amplification matrix (2-25)

∆t ∆t  ∆t −1 ∆t
D = eA∆t = eA(1+α) 2 eA(1−α) 2 = e−A(1+α) 2 eA(1−α) 2 = D−1
1 D2 (2–81)
where α is a number in the interval ]0, 1[. Additionally, (1-57) has been used with t = −(1 + α) ∆t
2 . From (1-53)
follows


1+α (1 + α)2 2 2 (1 + α)3 3 3 (1 + α)4 4 4 ⎪
D1 = e −A(1+α) ∆t
2 =I− ∆tA + ∆t A − ∆t A + ∆t A + · · · ⎪

2 8 48 384
1−α (1 − α)2 2 2 (1 − α)3 3 3 (1 − α)4 4 4 ⎪
∆t A + · · · ⎪
∆t
D2 = eA(1−α) 2 =I+ ∆tA + ∆t A + ∆t A + ⎭
2 8 48 384
(2–82)
From (1-55) follows, that the matrices D 1 and D2 alternatively may be written as

D1 = ΨΛD1 Ψ−1
(2–83)
D2 = ΨΛD2 Ψ−1
ΛD1 and ΛD2 are diagonal matrices, storing the eigenvalues of D 1 and D2 in the main diagonal. These are given
as

1+α (1 + α)2 2 2 (1 + α)3 3 3 (1 + α)4 4 4 ⎪
ΛD1 =I− ∆tΛA + ∆t ΛA − ∆t ΛA + ∆t ΛA + · · · ⎪

2 8 48 384 (2–84)
1−α (1 − α)2 2 2 (1 − α)3 3 3 (1 − α)4 4 4 ⎪
ΛD2 =I+ ∆tΛA + ∆t ΛA + ∆t ΛA + ∆t ΛA + · · · ⎪

2 8 48 384
9
E.L. Wilson: A Computer Program for the Dynamic Stress Analysis of Underground Structures. SESM Report
No- 68-1, Division of Structural Engineering and Structural Mechanics, University of Califirnia, Berkeley, 1968.
10
H.H. Hilber and T.J.R. Hughes and R.L. Taylor: Collocation, Dissipation and ”Overshoot” for Time Integra-
tion Schemes in Structural Dynamics. Earthquake Engineering and Structural Dynamics, 6, 1978, 99-118.
11
H.H. Hilber, T.J.R. Hughes and R.L. Taylor: Improved Numerical Dissipation for Time Integration in Struc-
tural Dynamics. Earthquake Engineering and Structural Dynamics, 5, 1977, 283-292.
12
W.L. Wood, M. Bossak and O.C. Zienkiewicz: An Alpha Modification of Newmarks Method. International
Journal for Numerical Methods in Engineering, 15, 1981, 1562-1566.
13
J. Chung and G.M. Hulbert: A time Integration Algorithm for Structural Dynamics with Improved Numerical
Dissipation: The Generalized α Method. Journal of Applied Mechanics, 60, 1993, 371-375.
2.2 Generalized Alpha Algorithm 51

Ψ is a modal matrix containing the eigenvectors of D 1 and D2 stored column-wise. It is remarkable that these
eigenvectors without any approximation are equal to the eigenvectors of the matrix of A, independently of the
number of terms retained in the expansion (2-94). Hence, the local truncation error is entirely related to the
truncation errors of these expansion.
52 Chapter 2 – NUMERICAL INTEGRATION OF EQUATIONS OF MOTION

2.3 Exercises
2.1 Consider the damped eigenvibrations of the two-degrees-of-freedom system defined in Ex-
ample 1.2 subjected to the initial values
x1 (0) = 0.01 m , x2 (0) = ẋ1 (0) = ẋ2 (0) = 0
(a.) Write a MATLAB program, which perform Newmark integration.
(b.) Perform and compare the calculation for (β, γ) = (0.25, 0.50), (β, γ) = (0.25, 0.25)
with the time-step h = T1 /10 and h = T1 /100, where T1 denotes the fundamental
undamped eigenperiod.

2.2 Consider the same problem as in Exercise 2.1.


(a.) Write a MATLAB program, which perform generalized alpha integration.
(b.) Perform and compare the calculation for |λ∞ | = 0.8, |λ∞ | = 1.0 with the time-step
h = T1 /10 and h = T1 /100, where T1 denotes the fundamental undamped eigenpe-
riod.
C HAPTER 3
LINEAR EIGENVALUE PROBLEMS

In this chapter the generalized eigenvalue problem (1-9), and the related characteristic polyno-
mial will be further analyzed. At first in Section 3.1, the Gauss factorization of the coefficient
matrix of the generalized eigenvalue problem is treated. This factorization plays an important
role in several iterative numerical eigenvalue solvers based on the characteristic polynomial.
Furthermore, the factorization provides a simple way for calculating a sequence of characteris-
tic polynomials known as the Sturm sequence, which makes it possible to formulate upper and
lower bounds of the eigenvalues of the problem. These bounds are contained in the so-called
eigenvalue separation principle treated in Section 3.2. Various iterative schemes for solving the
indicated generalized eigenvalue problem require that the stiffness matrix is non-singular. For
structures, which admit stiff-body motions, it then becomes necessary to perform a so-called
shift, where an artificial non-singular stiffness matrix is introduced. The eigenvectors of the
shifted system are identical to those of the original problem, whereas the eigenvalues of the
two systems deviate with the specified shift parameter. Shifting of a generalized eigenvalue
problem is treated in Section 3.3. Some iterative eigensolvers presume a special eigenvalue
problem corresponding to M = I in (1-9). In this case an introductory transformation from
the anticipated generalized eigenvalue problem into an equivalent special eigenvalue problem
becomes necessary. This can be achieved in several ways. In Section 3.4 a so-called similarity
transformation has been used, which preserves the symmetry of the transformed stiffness ma-
trix. A similarity transformation leave the eigenvalues unaffected, whereas the eigenvectors are
changed in a known manner.

3.1 Gauss Factorization of Characteristic Polynomials


Since the coefficient matrix of the generalized eigenvalue problem K − λM is symmetric, it
may be Gauss factorized on the form

K − λM = LDLT (3–1)

where L is a lower triangular matrix with units in the main diagonal, and D is a diagonal matrix,
given as

— 53 —
54 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

⎡ ⎤
1
⎢ ⎥
⎢ l21 1 ⎥
⎢ ⎥
L=⎢
⎢ l31 l32 1

⎥ (3–2)
⎢ .. .. .. . . ⎥
⎣ . . . . ⎦
ln1 ln2 ln3 · · · 1

⎡ ⎤
d11 0 ··· 0
⎢0 d ··· ⎥
⎢ 22 0 ⎥
D=⎢ . .. .. .. ⎥ (3–3)
⎣ .. . . . ⎦
0 0 · · · dnn

The details of the Gauss factorization of an symmetric matrix has been given in Box. 3.1. Since,
det(L) = det(LT ) = 1, the following representation of the characteristic polynomial (1-10) is
obtained

         
P (λ) = det LDLT = det L det D det LT = det D = d11 d22 · · · dnn (3–4)

At the same time (1-11) can be written on the form

    
P (λ) = a0 λ − λ1 λ − λ2 · · · λ − λn (3–5)

 similarity between (3-4) and (3-5), d ii is very different from the correspond-
Despite the striking
ing factor λ − λi in (3-5), as demonstrated in Example 3.2 below.

Let λ be monotonously increased in the interval [0, ∞[. From (1-10) follows that P (0) =
an = det(K) ≥ 0. Since all factors in (3-5) have negative sign for λ ∈]0,  λ1 [, itfollows that
P (λ) > 0 throughout this interval. As λ passes λ 1 from below, the factor λ − λ1 changes its
sign from negative to positive, while the other factors remain negative. This means that the sign
of P (λ) changes from positive to negative at the passage of λ 1 . Then, a similar sign change
must occur in (3-4). For λ < λ1 all the diagonal elements d11 , d22 , . . . , dnn are positive. As
λ passes λ1 , exactly one of these factors (not necessarily d11 ) changes its sign from positive
to negative, while the other factors remain positive. The sign of the characteristic
  polynomial
remain negative for λ ∈]λ1 , λ2 [. As λ passes λ2 from below, the factor λ − λ2 in the same
way changes its sign from negative to positive, making the characteristic polynomial positive in
the interval λ ∈]λ2 , λ3 [. A similar sign change occurs in (3-4), meaning that an additional diag-
onal element has changed its sign from positive to negative. Hence, for λ ∈]λ 2 , λ3 [ exactly two
of the diagonal elements d11 , d22 , . . . , dnn are negative. Proceeding in this manner, it is seen
that if λ is placed somewhere in the interval ]λ m , λm+1 [, exactly m diagonal elements among
d11 , d22 , . . . , dnn are negative. This observation is contained in the following theorem.
3.1 Gauss Factorization of Characteristic Polynomials 55

Theorem 3.1: Let D be the diagonal matrix in the Gauss factorization of the coefficient matrix
K − λM of a generalized eigenvalue problem, and λ is an arbitrary parameter. Then, the num-
ber of negative components in the main diagonal of D is equal to the number of eigenvalues of
the generalized eigenvalue problem, which are smaller than the parameter λ entering the factor-
ization.

The theorem can be used to formulate bounds for any of the eigenvalues as demonstrated below
in Example 3.3. Actually, one can calculate say the jth eigenvalue λ j with arbitrary accuracy.
The method is simply to make an initial sequence of calculations of the characteristic polyno-
mial P (λ) as a function of λ by (3-4), until j components in the main diagonal of D are negative.
Next, one can perform additional calculations to reduce the interval, where the jth sign change
takes place. This procedure for calculation of eigenvalues is known as the telescope method.

Box 3.1: Gauss factorization of symmetric matrix

Gauss factorization reduces a symmetric matrix K of dimension n × n to an upper trian-


gular matrix S in a sequence of n − 1 matrix multiplications. After the first (i − 1) matrix
multiplications the following matrix is considered

K(i) = L−1 −1 −1
i−1 Li−2 · · · L1 K , i = 2, . . . , n (3–6)

where K(1) = K. Sequentially, the indicated matrix multiplications produce zeros below
the main diagonal of the columns j = 1, . . . , i − 1. Then, pre-multiplication of K (i) with
L−1
i will produce zeroes below the main diagonal of the ith column without affecting the
zeroes in the previous columns. L−1
i is a lower triangular matrix with units in the principal
diagonal, and where only the ith column is non-zero, given as

⎡ ⎤
1
⎢ ⎥
⎢0 1 ⎥
⎢. .. . . ⎥
⎢ .. . ⎥
⎢ . ⎥
⎢ ⎥
⎢0 0 ··· 1 ⎥
L−1
i =⎢ ⎥ (3–7)
⎢0 0 ··· 0 1 ⎥
⎢ ⎥
⎢0 0 ··· 0 −li+1,i 1 ⎥
⎢ ⎥
⎢ .. .. .. .. .. .. ⎥
⎣. . . . . . ⎦
0 0 · · · 0 −ln,i 0 ··· 1
56 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

The components lj,i entering the ith column are given as


(i)
Kj,i
lj,i = (i)
, j = i + 1, . . . , n (3–8)
Ki,i
(i)
where Kj,i denotes the component in the jth row and ith column of K (i) . By insertion it
is proved that the inverse of (3-7) is given as

⎡ ⎤
1
⎢ ⎥
⎢0 1 ⎥
⎢. .. . . ⎥
⎢ .. . ⎥
⎢ . ⎥
⎢ ⎥
⎢0 0 ··· 1 ⎥
Li = ⎢ ⎥ (3–9)
⎢0 0 ··· 0 1 ⎥
⎢ ⎥
⎢0 0 ··· 0 li+1,i 1 ⎥
⎢ ⎥
⎢ .. .. .. .. .. . . ⎥
⎣. . . . . . ⎦
0 0 · · · 0 ln,i 0 ··· 1

Then, K(n) obtained after the (n − 1)th multiplication with L −1


n−1 , has zeroes in all the first
(n − 1) columns below the main diagonal, corresponding to an upper triangular matrix S.
Hence

L−1 −1 −1
n−1 Ln−2 · · · L1 K = S ⇒
K = LS , L = L1 L2 · · · Ln−1 (3–10)

Since, L defined by (3-10) is the product of lower triangular matrices with 1 in the main
diagonal, it becomes a matrix with the same structure as indicated by (3-2).

Because K is symmetric, S must have the structure

S = DLT (3–11)

where D is a diagonal matrix, given by (3-3). This proofs the validity of the factorization
(3-1).
3.1 Gauss Factorization of Characteristic Polynomials 57

Example 3.1: Gauss factorization of a three-dimensional matrix

Given the symmetric matrix

⎡ ⎤
5 −4 1
⎢ ⎥
K = K(1) = ⎣−4 6 −4⎦ (3–12)
1 −4 6

⎧ ⎡ ⎤

⎪ 5 −4 1

⎪ −1 (1) ⎢ ⎥
⎡ ⎤ ⎪
⎪ K = L1 K = ⎣0
(2)
2.8 −3.2⎦


1 0 0 ⎨ 0 −3.2 5.8
⎢ ⎥
L−1 = ⎣ 45 1 0⎦ ⇒ ⎡ ⎤ (3–13)
1


− 51 0 1 ⎪

1 0 0

⎪ ⎢ 4 ⎥

⎪ L(1)
= L1 = ⎣ − 1 0⎦
⎩ 5
1
5 0 1

⎧ ⎡ ⎤

⎪ 5 −4 1

⎪ ⎢ ⎥
⎡ ⎤ ⎪
⎪ K(3) = L−1
2 K
(2)
= S = ⎣0 2.8 −3.2⎦


1 0 0 ⎨ 0 0 2.1429
⎢ ⎥
L−1 = ⎣0 1 0⎦ ⇒ ⎡ ⎤ (3–14)
2


0 3.2
1 ⎪

1 0 0

⎪ ⎢ ⎥
2.8

⎪ L(2) = L1 L2 = L = ⎣−0.8 1 0⎦

0.2 −1.1429 1

From this follows that

⎡ ⎤ ⎡ ⎤
1 0 0 5 0 0
⎢ ⎥ ⎢ ⎥
L = ⎣−0.8 1 0⎦ , D = ⎣0 2.8 0 ⎦ (3–15)
0.2 −1.1429 1 0 0 2.1429

Example 3.2: Gauss factorization of a three-dimensional generalized eigenvalue problem

Of course for a given value of λ the matrix K − λM may be factorized according to the method explained in
Box 3.1. However, for smaller problems explicit expressions may be derived, as demonstrated in the following.
Given the mass- and stiffness matrices defined in Example 1.4, the components of L and D are calculated from the
following identities, cf. (1-79), (3-1)

⎡ ⎤⎡ ⎤⎡ ⎤ ⎡ ⎤
1 0 0 d11 0 0 1 l21 l31 d11 d11 l21 d11 l31
⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥
⎣l21 l 0⎦ ⎣ 0 d22 0 ⎦ ⎣0 1 l32 ⎦ = ⎣d11 l21 2
d22 + d11 l21 d11 l21 l31 + d22 l32 ⎦ =
l31 l32 1 0 0 d33 0 0 1 d11 l31 d11 l21 l31 + d22 l32 2
d33 + d11 l31 2
+ d22 l32
⎡ ⎤
2 − 12 λ −1 0
⎢ ⎥
⎣ −1 4−λ −1 ⎦ (3–16)
0 −1 2 − 12 λ
58 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

Equating the corresponding components on the left and right hand sides, provides the following equations for the
determination of the unknown quantities

1 1  ⎫
d11 = 2 − λ = − λ − 4 ⎪

2 2 ⎪



2 ⎪

d11 l21 = −1 ⇒ l21 = ⎪

λ−4 ⎪



1  λ2 − 8λ + 14 ⎪

=4−λ ⇒ d22 = 4 − λ + λ−4 
4 ⎪

2 = −
2
d22 + d11 l21 ⎪

2 λ−4 λ−4
(3–17)
λ−4 ⎪

d11 l21 l31 + d22 l32 = d22 l32 = −1 ⇒ l32 = 2 ⎪

λ − 8λ + 14 ⎪



1 ⎪

d33 + d11 l31 + d22 l32 = d33 − l32 = 2 − λ ⇒
2 2 ⎪



2    ⎪


1 λ−4 1 λ3 − 12λ2 + 44λ − 48 1 λ−2 λ−4 λ−6 ⎪ ⎪

d33 = 2 − + 2 =− = −
2 λ − 8λ + 14 2 λ2 − 8λ + 14 2 λ2 − 8λ + 14

Then, the following expression for the characteristic equation is obtained, in agreement with (1-80)

   
1  λ2 − 8λ + 14
1 λ − 2 λ − 4 λ − 6
1   
P (λ) = d11 d22 d33 = − λ−4 − − = − λ−2 λ−4 λ−6
2 λ−4 2 λ − 8λ + 14
2 4
(3–18)

Example 3.3: Bounds on eigenvalues

In this example bounds on the eigenvalues of the GEVP in Example 1.4 is constructed from the number of negative
components in the diagonal of the matrix D, using Theorem 3.1.

For λ = 1 one gets:

⎡ ⎤ ⎡ ⎤⎡ ⎤⎡ ⎤
−1 3
0 1 3
0 0 1 − 23 0
⎢ 2
⎥ ⎢ ⎥⎢ 2
⎥⎢ ⎥
K − λM = ⎣−1 3 −1⎦ ⇒ LDLT = ⎣− 32 1 ⎦ ⎣0 7
3 0 ⎦⎣ 1 − 73 ⎦ (3–19)
0 −1 3
2 0 −7
3
1 0 0 15
14 1

The components of the matrices L and D may be calculated by the formulas indicated in (3-17). As seen
d11 = 32 > 0, d22 = 73 > 0, d33 = 15
14 > 0. Hence, all three diagonal components are positive, from which
it is concluded that λ1 > λ = 1.

For λ = 8 one gets:

⎡ ⎤ ⎡ ⎤⎡ ⎤⎡ ⎤
−2 −1 0 1 −2 0 0 1 1
0
⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ 2
2⎥
K − λM = ⎣−1 −4 −1⎦ ⇒ LDLT = ⎣ 12 1 ⎦ ⎣ 0 −2
7
0⎦ ⎣ 1 7⎦
(3–20)
0 −1 −2 0 2
7 1 0 0 − 12
7 1

As seen d11 = −2 < 0, d22 = − 72 < 0, d33 = − 12


7 < 0. Hence, all three diagonal components are negative, from
which it is concluded that λ 3 < λ = 8.

For λ = 5 one gets:


3.2 Eigenvalue Separation Principle 59

⎤⎡ ⎡ ⎤⎡ ⎤⎡ ⎤
− 21 −1 0 1 − 12 0 0 1 2 0
⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥
K − λM = ⎣ −1 −1 −1 ⎦ ⇒ LDLT = ⎣2 1 ⎦⎣ 0 1 0⎦ ⎣ 1 −1⎦ (3–21)
0 −1 − 12 0 −1 1 0 0 − 23 1

As seen d11 = − 21 < 0, d22 = 1 > 0, d33 = − 32 < 0. Hence, two diagonal components are negative and one is
positive, from which it is concluded that λ 2 < λ = 5 < λ3 .

For λ = 3 one gets:

⎤ ⎡ ⎡ ⎤⎡ ⎤⎡ ⎤
−1 0 1
1 1
0 0 1 −2 0
⎢ ⎥ 2
⎢ ⎥ ⎢ 2
⎥⎢ ⎥
K − λM = ⎣−1 1 −1⎦ ⇒ LDLT = ⎣−2 1 ⎦ ⎣ 0 −1 0⎦ ⎣ 1 1⎦ (3–22)
0 −1 1
2 0 1 1 0 0 3
2 1

As seen d11 = 12 > 0, d22 = −1 < 0, d33 = 32 < 0. Hence, two diagonal components are positive and one is
negative, from which it is concluded that λ 1 < λ = 3 < λ2 .

In conclusion the following bounds prevail


1 < λ1 < 3 ⎪

3 < λ2 < 5 (3–23)


5 < λ3 < 8

3.2 Eigenvalue Separation Principle


The matrices M(m) and K(m) of dimension (n − m) × (n − m) are obtained from M and K,
if the last m rows and columns are omitted in these matrices. Then, consider the sequence of
related characteristic polynomials of the order (n − m)

 

P (m)
λ (m)
= det K − λ M
(m) (m) (m)
, m = 0, 1, . . . , n − 1 (3–24)

where M(0) = M, K(0) = K, λ(0) = λ and P (0) (λ) = P (λ). The eigenvalues corresponding to
(m) (m) (m)
M(m) and K(m) are denoted as λ1 , λ2 , . . . , λn−m.
 
Now, for any m = 0, 1, . . . , n − 1 it can be proved that the roots of P (m+1) λ(m+1) = 0 are
separating the roots of P (m) λ(m) = 0, i.e.

(m) (m+1) (m) (m+1) (m) (m+1) (m)


0 ≤ λ1 ≤ λ1 ≤ λ2 ≤ λ2 ≤ · · · ≤ λn−m−1 ≤ λn−m−1 ≤ λn−m ≤ ∞ (3–25)
A formal proof of (3-25) has been given by Bathe.1 The sequence of polynomials P (m) (λ) with
roots fulfilling the property (3-25), is denoted a Sturm sequence. (3-25) is illustrated in Exam-
ple 3.4.

1
K.-J. Bathe: Finite Element Procedures. Printice Hall, Inc., 1996.
60 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

Next, consider the Gauss factorization (3-1). Omitting the last m rows and columns in M and
K is tantamount to omitting the last m rows and columns in L and D. Then

 
 T  
P (m) λ(m) = det K(m) − λ(m) M(m) = det L(m) D(m) L(m) = det D(m) =
d11 d22 · · · dn−m,n−m (3–26)

where

⎡ ⎤
1
⎢ ⎥
⎢ l21 1 ⎥
⎢ ⎥
L(m) =⎢
⎢ l31 l32 1 ⎥
⎥ (3–27)
⎢ .. .. .. .. ⎥
⎣ . . . . ⎦
ln−m,1 ln−m,2 ln−m,3 ··· 1

⎡ ⎤
d11 0 ··· 0
⎢0 d ··· ⎥
⎢ 22 0 ⎥
D(m) =⎢ . . .. .. ⎥ (3–28)
⎣ .. .. . . ⎦
0 0 · · · dn−m,n−m

The bounding property explained in Theorem 3.1 for the case m = 0 can then easily be gener-
alized. Let λ(m) = µ, and perform a Gauss factorization on the matrix K(m) − µM(m) . Then
(m)
the number of eigenvalues, λj < µ, will be equal to number of negative diagonal components
d11 , . . . , dn−m,n−m in the matrix D.

The number of negative elements in main diagonal of the matrix D in the Gauss factorization of
K − λM = LDLT , and hence the number of eigenvalues smaller than λ, can then be retrieved
from the signs of the sequence P (0) (λ), P (1) (λ), . . . , P (n−1) (λ) as seen in the following way.

Introduce P (n) (λ) as an arbitrary positive quantity. Since P (n−1) (λ) = d11 , it follows that the
sequence P (n) (λ), P (n−1) (λ) has the sign sequence sign(P (n) (λ)), sign(P (n−1) (λ)) = +, −, if
d11 < 0, and the sign sequence +, +, if d11 > 0. d11 < 0 indicates that at least one eigenvalue
is smaller than λ, in which case one sign change, namely from + to −, has occurred in the
indicated sign sequence.

Next, P (n−2) (λ) = d11 d22 is considered. d11 < 0 ∧ d22 < 0 indicates that two eigenvalues
are smaller than λ. This in turns implies that P (n−1) (λ) has a negative sign, and P (n−2) (λ)
has a positive sign. Then, one additional sign change has occurred in the sequence of sign
of the characteristic polynomials sign(P (n) (λ)), sign(P (n−1) (λ)), sign(P (n−2) (λ))=+, −, +. If
d22 > 0, then P (n−1) (λ) and P (n−2) (λ) have the same sign, and no additional sign change is
recorded in the sequence of signs of the characteristic polynomials.
3.2 Eigenvalue Separation Principle 61

Proceeding in this way it is seen that the number of sign changes in the sequence of signs
sign(P (n) (λ)), sign(P (n−1) (λ)), . . . , sign(P (0) (λ)) determines the total number of eigenvalues
smaller than λ. This property of the sequence of characteristic polynomials is known as a Sturm
sequence check. In Example 3.5 it is illustrated, how the sign of the components d 11 , d22 , d33
for the case n = 3 can be retrieved from the sequence of signs of the Sturm sequence.

Example 3.4: Bounds on eigenvalues by eigenvalue separation principle

For the mass- and stiffness matrices defined in Example 1.4, the matrices M (1) and K(1) become

   
1
0 2 −1
M(1) = 2 , K(1) = (3–29)
0 1 −1 4

The characteristic equation (1-10) becomes

& '  √
(1)
2 − 12 λj −1 (1)
λ1 = 4 − 2 = 2.59
det (1) =0 ⇒ (1) √ (3–30)
−1 4 − λj λ2 = 4 + 2 = 5.41

The matrices M(2) and K(2) become

1  
(2)
M(2) = , K(2) = 2 ⇒ λ1 = 4 (3–31)
2

The relation (3-25) becomes

(1)
⎫ ⎧ (1) ⎧
λ1 ≤ λ1 ≤ λ2 ⎪ ⎪ 0 ≤ λ1 ≤ λ1

⎬ ⎪
⎨ ⎨ 0 ≤ λ1 ≤ 2.59

(1) (2)
λ1 ≤ λ1 ≤ λ2
(1)
⇒ (1)
λ1 ≤ λ2 ≤ λ2
(1)
⇒ 2.59 ≤ λ2 ≤ 5.41 (3–32)

⎪ ⎪
⎪ ⎪

(1) ⎭ ⎩ (1) 5.41 ≤ λ3 ≤ ∞
λ2 ≤ λ2 ≤ λ3 λ2 ≤ λ3 ≤ ∞

The exact solutions are λ 1 = 2, λ2 = 4, and λ3 = 6, cf. Example 1.4.

Example 3.5: Sturm sequences and correspondence to sign of components in D-matrix

(3)
Consider a generalized eigenvalue of order n = 3. For a given value of λ, the Sturm sequence P (λ), P (2) (λ),
62 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

P (1) (λ), P (0) (λ) is calculated. Below are shown the 8 possible sign sequences of the Sturm sequence.
⎧ ⎧ ⎧

⎪ ⎪
⎪ +++ ⎪ ++++
⎪ P (0) (λ) > 0 ⇒ d > 0

⎪ ⎪
⎪ ⎨

⎪ ⎪
⎪ P (1) (λ) > 0 ⇒ d22 > 0
33

⎪ ⎪


⎪ ⎪
⎪ ⎪
⎪ +++−

⎪ ⎪
⎪ ⎩ (0)

⎪ ++ ⎨ P (λ) < 0 ⇒ d33 < 0



⎪ P (2)
(λ) > 0 ⇒ d11 > 0 ⎧

⎪ ⎪
⎪ ⎪ ++−+

⎪ ⎪
⎪ ++− ⎪ P (0) (λ) > 0 ⇒ d < 0


⎪ ⎪
⎪ 33

⎪ ⎪
⎪ P (1) (λ) < 0 ⇒ d22 < 0

⎪ ⎪
⎪ ⎪

⎪ ⎪
⎪ ⎪ ++−−
⎩ (0)
+ ⎪
⎨ ⎩ P (λ) < 0 ⇒ d33 > 0
P (3) (λ) > 0 ⎧ +−++ (3–33)

⎪ ⎧

⎪ ⎪
⎪ ⎪


⎪ ⎪
⎪ +−+ ⎨ P (0) (λ) > 0 ⇒ d33 > 0

⎪ ⎪


⎪ ⎪
⎪ P (1)
(λ) > 0 ⇒ d22 < 0

⎪ ⎪
⎪ ⎪
⎪ +−+−

⎪ ⎪
⎨ ⎩

⎪ +− P (0) (λ) < 0 ⇒ d33 < 0

⎪ (2)


⎪ P (λ) < 0 d11 < 0
⎪ ⎧ +−−+

⎪ ⎪
⎪ ⎪

⎪ ⎪
⎪ ⎪
⎨ P (0) (λ) > 0 ⇒ d33 < 0

⎪ ⎪

+−−

⎪ ⎪
⎪ P (1)
(λ) < 0 ⇒ d > 0

⎪ ⎪

22


⎩ ⎩ ⎪ +−−−
⎩ (0)
P (λ) < 0 ⇒ d33 > 0

With an arbitrary positive value for P (3) (λ) the first curly bracket indicate how the sign of d 11 is retrieved from
the possible signs of P (2) (λ). The sign sequences ++ and +− have been indicated atop of P (2) (λ). At the
next level the sign of P (1) (λ) in combination to the previous sign sequence makes it possible to retrieve the
sign of d22 . Finally, at the 3rd level the sign of the characteristic polynomial P (0) (λ) in combination to the
previous sign sequence makes it possible to retrieve the sign of d 33 . As an example, the sequence of signs
sign(P (3) (λ)), sign(P (2) (λ)), sign(P (1) (λ)), sign(P (0) (λ))=+ − +− are obtained for the specific sign combi-
nation d11 < 0, d22 < 0 and d33 < 0. Moreover, there are three sign changes in the indicated sign sequence
+ − +−, and correspondingly all three components d 11 , d22 and d33 are negative. The reader is encouraged to
verify, that the number of sign changes in the sequence of signs at the lowest level in (3-33) always is equal to the
number of negative components in the specific combination of d 11 , d22 and d33 producing this sequence of signs.

Example 3.6: Physical interpretation of the eigenvalue separation principle

a) µ
F 1 2 j n−1 n F
u1 u2 u(x, t) uj uj+1 un−2 un−1
∆l
x
(1)
ω1 (0)
ω1
b)

(1)
ω2
(0)
ω2

Fig. 3–1 Vibrating string. a) Definition of elements and degrees of freedom. b) Undamped eigenmodes.
3.3 Shift 63

The vibrating string problem in Example 1.2 is considered again. The eigenvibrations of the discretized string is
given by (1-68) with M (0) = M and K(0) = K given by (1-69) or (1-70).

Next, consider the system defined by the matrices M (1) and K(1) of dimension (n − 2) × (n − 2), where the
last row and column are omitted in M(0) and K(0). Physically, this corresponds to constraining the displacement
un−1 (t) = 0, as indicated by the additional support in Fig. 1.3b. The corresponding eigenmodes of the continuous
system have been shown with a dashed signature. As seen in Fig. 1.3b the wave-lengths related to the circular
(0) (1) (0) (1)
eigenfrequencies ω 1 , ω1 , ω2 and ω2 decreases in the indicated order. Hence, the following ordering of these
eigenfrequencies prevails

(0) (1) (0) (1)


ω1 < ω1 < ω2 < ω2 (3–34)
(m)  (m) 2
Since λj = ωj , the corresponding ordering of the eigenvalues become

(0) (1) (0) (1)


λ1 < λ1 < λ2 < λ2 (3–35)

which corresponds to (3-25).

3.3 Shift
Occasionally, a shift on the stiffness matrix may be used to enhance the speed of calculation
of the considered GEVP. In order to explain this the eigenvalue problem (1-9) is written in the
following way

K − ρM + ρM − λj M Φ(j) = 0 (3–36)

Obviously, we have withdrawn and added the quantity ρM inside the bracket, where ρ is a
suitable real number, which will not affect neither the eigenvalues λ j , nor the eigenvectors Φ(j) .
(3-36) is rearranged on the form

K̂ − µj M Φ(j) = 0 (3–37)

where

K̂ = K − ρM , µ j = λj − ρ (3–38)
 
Hence, instead of the original generalized eigenvalue problem defined by the matrices K, M ,
 
the system with the matrices K̂, M is considered in the shifted system, where K̂ is calculated
as indicated in (3-38). The two systems have identical eigenvectors. However, the eigenvalues
of the shifted system become (λ1 − ρ), (λ2 − ρ), . . . , (λn − ρ), where λ1 , λ2 , . . . , λn denote the
eigenvalues of the original system.
64 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

For non-supported systems (e.g. ships and aeroplanes) a stiff-body motion Φ = 0 exists, which
fulfills

KΦ = 0 (3–39)

(3-39) shows that λ = 0 is an eigenvalue for such systems. Correspondingly, det(K) = 0 for
systems, which possesses a stiff-body motion. However, some numerical algorithms presume
that det(K) = 0. In such cases a preliminary shift on the stiffness matrix must be performed,
because det(K − ρM) = 0, if det(K) = 0.

Example 3.7: Shift on stiffness matrix


Given the mass- and stiffness matrices

   
2 1 3 −3
M= , K= (3–40)
1 2 −3 3

The characteristic equation (6-6) becomes

& ' 
3 − 2λ −3 − λ λ1 = 0
det =0 ⇒ (3–41)
−3 − λ 3 − 2λ λ2 = 6

λ1 = 0, since det(K) = 0.

Next, a shift on the stiffness matrix with ρ = −2 is performed, which provides

     
3 −3 2 1 7 −1
K̂ = +2 = (3–42)
−3 3 1 2 −1 7

Now, the characteristic equation becomes

& ' 
7 − 2µ −1 − µ µ1 = 2
det =0 ⇒ (3–43)
−1 − µ 7 − 2µ µ2 = 8
3.4 Transformation of GEVP to SEVP 65

3.4 Transformation of GEVP to SEVP


Some eigenvalue solvers are written for the special eigenvalue problem. Hence, their use pre-
sumes an initial transformation of the generalized eigenvalue problem (1-9). Of course, this
may be performed, simply by a pre-multiplication of (1-9) with M −1 . However, then the result-
ing system matrix M−1 K is no longer symmetric. In this section a similarity transformation is
indicated, which preserves the symmetry of the system matrix.

Since, M = MT it can be factorized on the form

M = SST (3–44)

The generalized eigenvalue problem (1-9) may then be written in the form

 −1 T (j)
K ST S Φ = λj SST Φ(j) ⇒
 T
S−1 K S−1 ST Φ(j) = λj ST Φ(j) (3–45)
 −1  −1 T
where the identity ST = S has been used. (1-9) can then be formulated in terms of
the following standard EVP

K̃Φ̃(j) = λj Φ̃(j) (3–46)

where

 T
K̃ = S−1 K S−1 (3–47)


Φ̃(j) = ST Φ(j) ⎬
 T (3–48)
Φ(j) = S−1 Φ̃(j) ⎭

(3-47) defines a similarity transformation with the transformation matrix S −1 , which diagonal-
ize the mass matrix. Similarity transformations is further explained in Chapter 6. Obviously,
K̃ = K̃T . As seen from (3-46) the eigenvalues λ1 , . . . , λn are identical for the original and
the transformed eigenvalue problem, whereas the eigenvectors Φ(j) and Φ̃(j) are related by the
transformation (3-48).

The determination of a matrix S fulfilling (3-44) is not unique. Actually, infinite many solutions
to this problem exist. Below, two approaches have been given. In both cases it is assumed that
M = MT is positive definite.
66 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

Generally, Choleski decomposition is considered the most effective way of solving the problem.
In this case a lower triangular matrix S is determined, so (3-44) is fulfilled. Obviously, S is
related to the Gauss factorization as follows

⎡√ ⎤
d11 0 ··· 0
⎢ √ ⎥
1 1 ⎢ 0 d22 ··· 0 ⎥
S = LD 2 , D2 = ⎢ .. .. .. .. ⎥ (3–49)
⎣ . . . . ⎦

0 0 ··· dnn
1
The diagonal matrix D 2 does only exist, if the components d ii of the matrix D are all positive.
This is indeed the case, if M is positive definite. Although, S may be calculated from (3-49),
there exists a faster and more direct algorithm for the determination of this quantity.

Alternatively, a so-called spectral decomposition of M may be used. The basis of this method
is the following SEVP for M

Mv(j) = ρj v(j) (3–50)

ρj and v(j) denotes the jth eigenvalue and eigenvector of M. Both are real, since M is symmet-
ric. The eigenvalue problems (3-50) can be assembled into the matrix formulation, cf. (1-14)

Mµ = VR (3–51)

⎡ ⎤
µ1 0 ··· 0
⎢0 µ ··· 0⎥
⎢ 2 ⎥
µ=⎢ . . .. .. ⎥ , V = [v(1) v(2) · · · v(n) ] (3–52)
⎣ .. .. . . ⎦
0 0 · · · µn

The eigenvectors are normalized to magnitude 1, i.e. v (i) T v(j) = δij . Then, the modal matrix
V fulfills, cf. (1-23)

V−1 = VT (3–53)

From (3-51) and (3-53) the following representation of M is obtained

M = VµVT (3–54)

Finally, from (3-44) and (3-54) the following solution for S is obtained
3.4 Transformation of GEVP to SEVP 67

⎡√ ⎤
µ1 0 ··· 0
⎢ √
1 1⎢ 0 µ2 ··· 0 ⎥ ⎥
S = Vµ 2 , µ =⎢
2
.. .. .. .. ⎥ (3–55)
⎣ . . . . ⎦

0 0 ··· µn

The drawback of the spectral approach is that an initial SEVP must be solved, before the trans-
formed eigenvalue problem (3-46) can be analyzed. Hence, the method requires the solution of
two SEVP of the same dimension.

Box 3.2: Choleski decomposition of symmetric positive definite matrix

Choleski decomposition factorizes a symmetric positive definite matrix M into the matrix
product of a lower triangular matrix S and its transpose, as follows

M = SST ⇒
⎡ ⎤ ⎡ ⎤⎡ ⎤
m11 m21 · · · mn1 s11 0 ··· 0 s11 s21 · · · sn1
⎢m ⎥ ⎢
· · · mn2 ⎥ ⎢ s21 s22 ··· ⎥⎢ 0 s · · · sn2 ⎥
⎢ 21 m22 0 ⎥⎢ 22 ⎥
⎢ . .. .. .. ⎥ =⎢ . .. .. .. ⎥⎢ . .. .. .. ⎥ =
⎣ .. . . . ⎦ ⎣ .. . . . ⎦⎣ .. . . . ⎦
mn1 mn2 · · · mnn sn1 sn2 · · · snn 0 0 · · · snn
⎡ ⎤
s211
⎢s s s222 + s221 ⎥
⎢ 21 11 symmetric ⎥
⎢ . .. .. ⎥ (3–56)
⎣ .. . . ⎦
sn1 s11 sn2 s22 + sn1 s21 · · · s2nn + s2n,n−1 + · · · + s2n2 + s2n1

Equating the components of the final matrix product with the component on and below
the main diagonal of M, equations can be formulated for the determinations of s ij , which

are solved sequentially. First s11 = m11 is calculated.
Next, si1 , i = 2, . . . , n are
determined from si1 = mi1 /s11 . Next, s22 = m22 − s21 is calculated, and si2 , i =
2

3, . . . , n can be determined from si2 = (m2i − si1 s21 )/s22 . Next, the 3th column can be
calculated and so forth. The general algorithm for calculating the components s ij in the
jth column reads

(
sjj = mjj − s2j,j−1 − · · · − s2j1 , j = 1, . . . , n
(3–57)
sij = (mij − si,j−1sj,j−1 − · · · − si1 sj1 )/sjj , i = j + 1, . . . , n
68 Chapter 3 – LINEAR EIGENVALUE PROBLEMS

3.5 Exercises
3.1 Given the same mass- and stiffness matrices as in Exercise 1.1.
(a.) Show that the eigenvalue separation principle is valid for the considered example.

3.2 Given the following mass- and stiffness matrices


   
2 0 6 −1
M= , K=
0 0 −1 4

(a.) Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
(b.) Perform a shift ρ = 3 on K and calculate the eigenvalues and eigenmodes of the new
problem.

3.3 Given a symmetric matrix K.


(a.) Write a MATLAB program, which determines the matrices L and D of a Gauss fac-
torization as well as the matrix (S−1 )T , where S is a lower triangular matrix fulfilling
SST = K.

3.4 Given a symmetric positive definite matrix K.


(a.) Write a MATLAB program, which performs Choleski decomposition.
C HAPTER 4
APPROXIMATE SOLUTION METHODS

This chapter deals with various approximate solution methods for solving the generalized eigen-
value problem.

Section 4.1 consider the application of static condensation or Guyan reduction. 1 The idea of
the method is to reduce the magnitude of the generalized eigenvalue problem from n to n 1
n
degrees of freedom. Next, the reduced system is solved exact. In principle no approximation is
related to the procedure.

Section 4.2 deals with the application of Rayleigh-Ritz analysis. Similar to static condensation
this is a kind of system reduction procedure. As shown the method can be given a formulation
identical to static condensation. However, exact results are no longer obtained.

Section 4.3 deals with the bounding of the error related to a certain approximate eigenvalue.

4.1 Static Condensation


The basic assumption of static condensation is that inertia is confined to the first n 1 degrees of
freedom, whereas inertia effects are ignored for the remaining n2 = n − n1 degrees of freedom.
The approximation of the method stems from the ignorance of these inertial couplings. This
corresponds to the following partitioning of the mass and stiffness matrices

   
M11 0 K11 K12
M= , K= (4–1)
0 0 K21 K22

M11 and K11 are sub-matrices of dimension n1 × n1 , K12 = KT21 is a sub-matrix of dimension
n1 × n2 , and K22 is of the dimension n2 × n2 . The eigenvalue problems for the first n1 and
the last n2 eigenvectors can be assembled in the following partitioned matrix formulations, cf.
(1-14)
1
S.R.K. Nielsen: Structural Dynamics, Vol. 1. Linear Structural Dynamics, 4th Ed. Aalborg tekniske Univer-
sitetsforlag, 2004.

— 69 —
70 Chapter 4 – APPROXIMATE SOLUTION METHODS

      ⎫
K11 K12 Φ11 M11 0 Φ11 ⎪

= Λ1 ⎪


K21 K22 Φ21 0 0 Φ21 ⎪

      (4–2)


K11 K12 Φ12 M11 0 Φ12 ⎪
= Λ2 ⎪



K21 K22 Φ22 0 0 Φ22

where Λ1 and Λ2 are diagonal matrices of the dimension n1 × n1 and n2 × n2

⎡ ⎤ ⎡ ⎤
λ1 0 ··· 0 λn1 +1 0 ··· 0
⎢0 λ ··· ⎥ ⎢ 0 ··· 0⎥
⎢ 2 0 ⎥ ⎢ λn1 +2 ⎥
Λ1 = ⎢ . .. .. .. ⎥ , Λ2 = ⎢ . .. .. .. ⎥ (4–3)
⎣ .. . . . ⎦ ⎣ .. . . .⎦
0 0 · · · λn1 0 0 · · · λn

(j) (j)
Φ1 and Φ2 denote sub-vectors encompassing the first n1 and the last n2 components of the
jth eigenmode Φ(j) . Then, the matrices Φ11 , Φ12 , Φ21 and Φ22 entering (4-2) are defined as

(1) (2) ⎫
(n ) (n +1) (n +2) (n)
Φ11 = Φ1 Φ1 · · · Φ1 1 , Φ12 = Φ1 1 Φ1 1 · · · Φ1 ⎬
(1) (2) (4–4)
(n ) (n +1) (n +2) (n)
Φ21 = Φ2 Φ2 · · · Φ2 1 , Φ22 = Φ2 1 Φ2 1 · · · Φ2 ⎭

At first the solution for the first n1 eigenmodes is considered. From the lower lower half of the
first matrix equation in (4-2) follows

K21 Φ11 + K22 Φ21 = 0 ⇒

Φ21 = −K−1
22 K21 Φ11 (4–5)

From the corresponding upper half of the said matrix equation, and (4-5), follows

K11 Φ11 − K12 K−1


22 K21 Φ11 = M11 Φ11 Λ1 ⇒

K̃11 Φ11 = M11 Φ11 Λ1 (4–6)

where

K̃11 = K11 − K12 K−1


22 K21 (4–7)
4.1 Static Condensation 71
 
(4-6) is a generalized eigenvalue problem of reduced dimension n1 , which is solved for Λ1 , Φ11 .
Next, the remaining components of the first n 1 eigenmodes are calculated from (4-5). The
modal masses become

 T   
Φ11 M11 0 Φ11
m1 = = ΦT11 M11 Φ11 (4–8)
Φ21 0 0 Φ21

Hence, the total eigenmodes will be normalized to unit modal mass with respect to M, if the
sub-vectors Φ11 are normalized to unit modal mass with respect to M 11 .

Next, the solution for the last n 2 eigenmodes are considered. From the last matrix equation in
(4-2) follows

    
K11 K12 Φ12 M11 Φ12
Λ−1
2 = (4–9)
K21 K22 Φ22 0

Obviously, (4-9) is fulfilled for Λ−1 −1


2 = 0 ∧ Φ12 = 0. Λ2 = 0 implies that all n 2 eigenvalues
are equal to infinity. Hence, the following eigensolutions are obtained

⎡ ⎤
∞ 0 ··· 0    
⎢ 0 ∞ ··· 0 ⎥ Φ12 0
⎢ ⎥
Λ2 = ⎢ . . . . ⎥ , = (4–10)
⎣.. .
. . . .⎦
. Φ22 Φ22
0 0 ··· ∞

The matrix Φ22 is undetermined. Any matrix with linear independent column vectors will do.
Then, this quadratic matrix may simply be chosen as an n 2 × n2 unit matrix

Φ22 = I (4–11)

The modal masses become

 T   
0 M11 0 0
m2 = =0 (4–12)
Φ22 0 0 Φ22

Generally, if the ith row and the ith column in M are equal to zero, then Φ T = [0, . . . , 0, 1, 0, . . . , 0]
is an eigenvector with the eigenvalue λ = ∞. The modal mass is 0.

In praxis the calculation of K̃11 is solved by means of an initial Choleski decomposition of K 22 ,


cf. Box 3.2
72 Chapter 4 – APPROXIMATE SOLUTION METHODS

 −1 T −1
K22 = SST ⇔ K−1
22 = S S (4–13)

where both S and S−1 are lower triangular matrices. Then, K̃11 is determined from

K̃11 = K11 − RT R (4–14)

where the n2 × n2 matrix R is obtained as solution to the matrix equation

SR = K21 (4–15)

In principle (4-15) represent n2 linear equations with n2 right-hand sides. Given that S is a
lower triangular matrix, this is relatively easily solved.

Finally, it should be noticed that the static condensation approach is only of value if n 1
n.

Example 4.1: Static condensation


Given the following mass- and stiffness matrices
⎡ ⎤ ⎡ ⎤
0 0 0 0 2 −1 0 0
⎢ ⎥ ⎢ ⎥
⎢0 2 0 0⎥ ⎢−1 2 −1 0⎥
M=⎢ ⎥ , K=⎢ ⎥ (4–16)
⎣0 0 0 0⎦ ⎣ 0 −1 2 −1⎦
0 0 0 1 0 0 −1 1

The rows and columns are interchanged the mass and stiffness matrices, so the following eigenvalue problems are
obtained

⎡ ⎤ ⎡ ⎤ ⎫
2 0 −1 −1 ⎡ ⎤ 2 0 0 0 ⎡ ⎤⎡ ⎤⎪


⎢ 0
⎥ Φ ⎢ ⎥ Φ11 λ1 0 ⎪⎪

1 −1 0⎥ ⎢ 11 ⎥ ⎢0 1 0 0⎥ ⎢ ⎥⎢ ⎥⎪

⎢ ⎥⎣ ⎦=⎢ ⎥⎣ ⎦ ⎣ ⎦⎪

⎢ ⎥ ⎢ ⎥ ⎪

⎣−1 −1 2 0⎦ ⎣0 0 0 0⎦ ⎪
Φ21 Φ21 0 λ2 ⎪


−1 0 0 2 0 0 0 0 ⎪

⎡ ⎤ ⎡ ⎤ (4–17)

2 0 −1 −1 ⎡ ⎤ 2 0 0 0 ⎡ ⎤⎡ ⎤⎪



⎢ 0
⎥ ⎢
0⎥ ⎢Φ12 ⎥ ⎢0
⎥ Φ12
0⎥ ⎢ λ3 0 ⎪⎪


1 −1
⎥⎣
1 0 ⎥⎢ ⎥⎪

⎢ ⎥ ⎦=⎢

⎥⎣
⎥ ⎦ ⎣ ⎦⎪


⎣−1 −1 2 0⎦ Φ ⎣0 0 0 0⎦ Φ ⎪

22 22 0 λ4 ⎪


−1 0 0 2 0 0 0 0

A formal procedure for obtaining the mass and stiffness matrices in (4-17) by means of a similarity transformation
has been demonstrated in Example 6.1. Then,

       
2 0 −1 −1 2 0 2 0
K11 = , K12 = K21 = , K22 = , M11 = (4–18)
0 1 −1 0 0 2 0 1
4.1 Static Condensation 73

From (4-7) follows

    −1    
2 0 −1 −1 2 0 −1 −1 1 − 21
K̃11 = K11 − K12 K−1
22 K21 = − = (4–19)
0 1 −1 0 0 2 −1 0 −2
1 1
2

The reduced eigenvalue problem (4-6) becomes

   
1 − 12 2 0
Φ11 = Φ11 Λ1 (4–20)
− 21 1
2 0 1

The eigensolutions with eigenmodes normalized to modal mass 1 become

   √   
λ 0 1
− 42 0√ 1
− 21
Λ1 = 1 = 2
1
, Φ11 = √2 √ (4–21)
0 λ2 0 2 + 42 2
2
2
2

From (4-5) follows

 −1     √ √ 
2 0 −1 −1 1
− 12 1
+ 2
− 14 + 2
Φ21 =− √2 √ = 4 4 4 (4–22)
0 2 −1 0 2
2
2
2 1
4 − 14

From (4-10) and (4-11) follows

       
λ 0 ∞ 0 0 0 1 0
Λ2 = 3 = , Φ12 = , Φ22 = (4–23)
0 λ4 0 ∞ 0 0 0 1

After interchanging the degrees of freedom back to the original order (the 1st and 2nd components of Φ 11 and Φ12
are placed as the 2nd and 4th component of Φ (j) , the 1st and 2nd components of Φ 21 and Φ22 are placed as the
3rd and 1st component Φ (j) , the following eigensolution is obtained

⎡ ⎤ ⎡ √ ⎤ ⎫
λ1 0 0 0 1
− 2
0√ 0 0 ⎪

⎢ ⎥ ⎢
2 4
⎥ ⎪

⎢0 λ2 0 0⎥ ⎢ 0 1
+ 42 0 0⎥ ⎪

Λ=⎢ ⎥=⎢ 2 ⎥ ⎪

⎣0 0 λ3 0⎦ ⎣ 0 0 ∞ 0⎦ ⎪





0 0 0 λ4 0 0 0 ∞ ⎪

⎡ ⎤ (4–24)
1
− 14 0 1 ⎪ ⎪

⎥⎪⎪
4
  ⎢⎢ 1
− 12 0 0⎥ ⎪
⎥ ⎪

Φ = Φ(1) Φ(2) Φ(3) Φ(4) = ⎢ ⎪
⎥⎪
2√ √
⎢1 ⎪
⎣4 +
2
− 41 + 2
1 0⎦ ⎪⎪

√ 4 √ 4 ⎪

2 2
2 2 0 0
74 Chapter 4 – APPROXIMATE SOLUTION METHODS

4.2 Rayleigh-Ritz Analysis


Consider the generalized eigenvalue problem (6-5). If M is positive definite, so v T Mv > 0 for
any v = 0, the so-called Rayleigh quotient may be defined as

vT Kv
ρ(v) = (4–25)
vT Mv
It can be proved that ρ(v) fulfills the bounding, see Box 4.1

λ1 ≤ ρ(v) ≤ λn (4–26)

where λ1 and λn denote the smallest and the largest eigenvalues of the generalized eigenvalue
problem.

Especially, if v = Φ(1) , where Φ(1) has been normalized to unit modal mass, it follows that
Φ(1) T MΦ(1) = 1 and Φ(1) T KΦ(1) = λ1 , see (4-34) and (4-35) below. Then, ρ(v) = λ1 . This
property is contained in the so-called Rayleigh’s principle

λ1 = minn ρ(v) (4–27)


v∈R

Next, assume that v is M-orthogonal to Φ (1) , so Φ(1) T Mv = 0. Then, the following bounding
of the Rayleigh quotient may be proved, see Box 4.1

λ2 ≤ ρ(v) ≤ λn (4–28)

Correspondingly, λ2 may be evaluated by the following extension of the Rayleigh principle,


where the M-orthogonality of the test vector v to the first eigenmode Φ (1) has been included as
a restriction


⎨ v∈R
minn ρ(v)
λ2 = (4–29)
⎩ (1) T
Φ Mv = 0

The corresponding optimal vector will be v = Φ (2) .

Generally, if v is M-orthogonal to the first m − 1 eigenmodes Φ (1) , Φ(2) , . . . , Φ(m−1) , so


Φ(j) T Mv = 0 , j = 1, . . . , m − 1, the following bounding of the Rayleigh quotient may
be proved, see Box 4.1

λm ≤ ρ(v) ≤ λn , m<n (4–30)


4.2 Rayleigh-Ritz Analysis 75

Correspondingly, λm may be evaluated by the following extension of the Rayleigh variational


principle, where restriction of M-orthogonal of the test vector v to the eigenmodes Φ (j) =
0 , j = 1, . . . , m − 1 are included


⎨ v∈R
minn ρ(v)
λm = (4–31)
⎩ (j) T
Φ Mv = 0 , j = 1, . . . , m − 1

The corresponding optimal vector will be v = Φ (m) .

The Rayleigh quotient may be used to calculate an upper bound for the lowest eigenvalue λ 1 .
The quality of the estimate depends on the choice of v. The better the qualitative and quantita-
tive resemblance of v to the shape of the lowest eigenmode, the sharper will be the calculated
upper bound.

Box 4.1: Proof of boundings of the Rayleigh quotient

Given the linear independent eigenmodes, normalized to unit modal mass


Φ(1) , Φ(2) , . . . , Φ(n) . Using the eigenmodes as a vector basis, any n-dimensional
vector may be written as

v = q1 Φ(1) + q2 Φ(2) + · · · + qn Φ(n) (4–32)

Insertion of (4-32) into (4-25) provides

-
n -
n
qi qj Φ(i) T KΦ(j)
i=1 j=1 q12 λ1 + q22 λ2 + · · · + qn2 λn
ρ(v) = -n - n = (4–33)
q12 + q22 + · · · + qn2
qi qj Φ(i) T MΦ(j)
i=1 j=1

where the orthonormality conditions of the eigenmodes have been used in the last state-
ment, i.e.


0 , i = j
Φ(i) T MΦ(j) = (4–34)
1 , i=j

0 , i = j
Φ(i) T KΦ(j) = (4–35)
λi , i=j
76 Chapter 4 – APPROXIMATE SOLUTION METHODS

Given the following ordering of the eigenvalues

0 ≤ λ1 ≤ λ2 ≤ · · · ≤ λn−1 ≤ λn ≤ ∞ (4–36)

it follows directly from (4-33) that


q 2 λ1 + q 2 λ1 + · · · + qn2 λ1 ⎪
ρ(v) ≥ 1 2 2 2 = λ1 ⎪

q1 + q2 + · · · + qn2 ⎬
(4–37)
q 2 λn + q 2 λn + · · · + qn2 λn ⎪


ρ(v) ≤ 1 2 2 2 = λ n ⎭
q1 + q2 + · · · + qn 2

which proves the bounding (4-26).

(4-32) is pre-multiplied with Φ (j) T M. Then, use of (4-34) provides the following expres-
sion for the jth modal coordinate

qj = Φ(j) T Mv (4–38)

Hence, if v is M-orthogonal to Φ(j) , j = 1, . . . , m − 1 it follows that q1 = q2 = · · · =


qm−1 = 0. In this case (4-33) attains the form

2
qm 2
λm + qm+1 λm+1 + · · · + qn2 λn
ρ(v) = (4–39)
m+1 + · · · + qn
2 + q2
qm 2

Proceeding as in (4-37) it then follows that


2
qm 2
λm + qm+1 λm + · · · + qn2 λm ⎪
ρ(v) ≥ = λm ⎪


qm + qm+1 + · · · + qn
2 2 2 ⎬
(4–40)
q 2 λn + q 2 λn + · · · + qn2 λn ⎪

ρ(v) ≤ m 2 m+1 ⎪

= λn ⎭
2
qm + qm+1 + · · · + qn2

which proves the bounding (4-30).

The so-called Ritz analysis m linearly independent base vectors, Ψ (1) , . . . , Ψ(m) , are defined,
which span an m-dimensional subspace V m ⊆ Vn . Often the base vectors are determined as
the static deflections from m linearly independent load vectors f 1 , . . . , fm . This is preferred, be-
cause it often is simpler to specify static load, which will produce displacements qualitatively in
agreement with the eigenmodes to be determined by the analysis. The Ritz-basis is determined
from the equilibrium equation
4.2 Rayleigh-Ritz Analysis 77

KΨ = f ⇒ Ψ = K−1 f (4–41)


Ψ = Ψ(1) Ψ(2) · · · Ψ(m) , f = [f1 f2 · · · fm ] (4–42)

Then, any vector v ∈ Vm can be written on the form

⎡ ⎤ ⎡ ⎤
q1 q1
⎢ ⎥
⎢ q2 ⎥ ⎢ ⎥
⎢ q2 ⎥
v = q1 Ψ(1) + q2 Ψ(2) + · · · + qm Ψ(m) = Ψ(1) Ψ(2) · · · Ψ(m) ⎢ . ⎥ = Ψq , q = ⎢ . ⎥
⎣ .. ⎦ ⎣ .. ⎦
qm qm
(4–43)

The idea in Ritz analysis is to insert (4-43) into the Rayleigh quotient (4-25), and determine
the modal coordinates q1 , q2 , . . . , qm , which minimizes this quantity. Hence, the following re-
formulation of the Rayleigh quotient is considered

T
Ψq KΨq qT K̃q
ρ(q) =  T = (4–44)
Ψq MΨq qT M̃q

where


M̃ = ΨT MΨ = [M̃ij ] , M̃ij = Ψ(i) T MΨ(j)
(4–45)
K̃ = ΨT KΨ = [K̃ij ] , K̃ij = Ψ(i) T KΨ(j)

M̃ and K̃ are denoted as the projected mass- and stiffness matrices on the subspace spanned by
the Ritz basis Ψ.

The approximation to λ1 then follows from (4-27)

-
m -
m
qi K̃ij qj
i=1 j=1
λ1 ≤ ρ1 = min ρ(q) = min -m - m (4–46)
q∈Vm q1 ,...,qm
qi M̃ij qj
i=1 j=1

Generally, ρ1 is larger than λ1 in agreement with (4-26). Only for Φ(1) ∈ Vm will modal coor-
dinates q, . . . , qm exist, so Φ(1) = q1 Ψ(1) + · · · + qm Ψ(m) , with the implication that ρ 1 = λ1 .

The necessary condition for a minimum is that


78 Chapter 4 – APPROXIMATE SOLUTION METHODS

& '
∂ qT K̃q
=0 , i = 1, . . . , m ⇒
∂qi qT M̃q

 T  ∂
 T 
qT M̃q · ∂qi
q K̃q − qT K̃q · ∂qi
q M̃q
 2 =0 ⇒
qT M̃q

∂  T  ∂  T 
q K̃q − ρ q M̃q = 0 (4–47)
∂qi ∂qi
  - -m -m
Now, ∂q∂ i qT K̃q = ∂q∂ i mj=1 k=1 qj K̃ jk qk = 2 k=1 K̃ik qk , where the symmetry property,
  -
K̃jk = K̃kj (K̃ = K̃ ), has been applied. Similarly, ∂q∂ i qT M̃q = 2 m
T
k=1 M̃ik qk . Then, the
minimum condition (4-47) reduces to

%
m %
m
K̃ij qj − ρ M̃ij qj = 0 (4–48)
j=1 j=1

From (4-48) follows that ρ1 is determined as the lowest eigenvalue to the following generalized
eigenvalue problem of dimension m

K̃q − ρM̃q = 0 (4–49)

(4-49) has m eigensolutions (ρi , q(i) ), i = 1, . . . , m. ρi becomes an approximation to the ith


eigenvalue λi . The corresponding approximation to the ith eigenmode is calculated from

Φ̄(i) = q1,i Ψ(1) + · · · + qm,i Ψ(m) = Ψq(i) , i = 1, . . . , m (4–50)

where q1,i , . . . , q1m,i denote the components of q(i) .

The relations (4-50) can be assembled into the matrix equation

Φ̄ = ΨQ (4–51)

Φ̄ = Φ̄(1) Φ̄(2) · · · Φ̄(m) , Q = q(1) q(2) · · · q(m) (4–52)

We shall assume that the eigenvectors q(i) are normalized to unit modal mass with respect to
the projected mass matrix, i.e. the following orthonormality properties are fulfilled


0 , i = j
q(i) T M̃q(j) = (4–53)
1 , i=j
4.2 Rayleigh-Ritz Analysis 79


0 , i = j
q(i) T K̃q(j) = (4–54)
ρi , i=j

Then, the modal mass of the eigenmodes Φ̄ become

Φ̄T MΦ̄ = (ΨQ)T MΨQ = QT M̃Q = I (4–55)

Hence, the approximate eigenmodes Φ̄(i) will be normalized to unit modal mass, if this is the
case for the eigenvectors q(i) with respect to the projected mass matrix. Φ̄ forms an alternative
Ritz-basis in V m , which in addition is M-orthonormal. Similarly, the approximate eigenmodes
are K-orthogonal as follows

Φ̄T KΦ̄ = (ΨQ)T KΨQ = QT K̃Q = R (4–56)

where R is m-dimensional diagonal matrix with the eigenvalues ρ 1 , . . . , ρm in the main diago-
nal.

Obviously, the Rayleigh quotient approach corresponds to m = 1. Hence, Ritz analysis is


merely a multi-dimensional generalization, for which reason the name Rayleigh-Ritz analysis
has been coined for the method.

As a generalization to (4-26) the following boundings can be proved 2

λ1 ≤ ρ1 , λ2 ≤ ρ2 , ... , λm ≤ ρm ≤ λn (4–57)

2
K.-J. Bathe: Finite Element Procedures. Printice Hall, Inc., 1996.
80 Chapter 4 – APPROXIMATE SOLUTION METHODS

Box 4.2: Rayleigh-Ritz algorithm

1. Estimate m linearly independent static load vectors f 1 , . . . , fm , assembled column-


wise in the n × m matrix f = [f1 f2 · · · fm ].

2. Calculate the Ritz basis from Ψ = K−1 f , Ψ = Ψ(1) Ψ(1) · · · Ψ(m) .

3. Calculate projected mass and stiffness matrices in the m-dimensional subspace


spanned by the Ritz basis: M̃ = ΨT MΨ , K̃ = ΨT KΨ.

4. Solve the generalized eigenvalue problem of dimension m: K̃Q = M̃QR.

5. Determine approximations
(1) (2) to the lowest m eigenvector from the transformation Φ̄ =
ΨQ , Φ̄ = Φ̄ Φ̄ · · · Φ̄(m) . The corresponding approximate eigenvalues are
contained in the main diagonal of R.

Returning to the static condensation problem in Section 4.1, let us define a Ritz basis of the
dimension m = n1 as
⎡ ⎤
I
Ψ1 = ⎣ ⎦ (4–58)
−1
−K22 K21
where I is a unit matrix of dimension n 1 × n1 . Given the structure of the mass and stiffness
matrices in (4-1), we may then evaluate the following projected matrices

⎡ ⎤T ⎡ ⎤⎡ ⎤
I M11 0 I
M̃ = ΨT1 MΨ1 = ⎣ ⎦ ⎣ ⎦⎣ ⎦ = M11 (4–59)
−K−1
22 K21 0 −1
0 −K22 K21
⎡ ⎤T ⎡ ⎤⎡ ⎤
I K11 K12 I
K̃ = ΨT1 KΨ1 = ⎣ ⎦ ⎣ ⎦⎣ ⎦ = K11 − K12 K−1
22 K21 = K̃11
−K−1
22 K21 K21 K22 −1
−K22 K21
(4–60)
Hence, (4-49) reduce to the generalized eigenvalue problem (4-6), with Q = Φ11 , and R = Λ1 .
Consequently, static condensation may be interpreted as merely a Rayleigh-Ritz analysis with
the Ritz basis (4-58).

The following identity may be proved by insertion


⎡ ⎤⎡ ⎤ ⎡ ⎤
K11 K12 I
−1 I
⎣ ⎦⎣ ⎦ K11 − K12 K−1
22 K21 =⎣ ⎦ (4–61)
K21 K22 −K−1
22 K21 0
4.2 Rayleigh-Ritz Analysis 81

Then, we may construct an alternative Ritz basis from (4-41) with the static load given as the
right hand side of (4-61), i.e.

⎡ ⎤−1 ⎡ ⎤ ⎡ ⎤
K11 K12 I I
−1
−1
Ψ2 = K f = ⎣ ⎦ ⎣ ⎦ = ⎣ ⎦ −1
K11 − K12 K22 K21 = Ψ1 K̃−1
11
K21 K22 0 −K−1
22 K 21
(4–62)
Hence, the base vectors in Ψ2 is a linear combination of the base vectors in Ψ 1 . Then, Ψ1 and
Ψ2 span the same subspace Vn1 , for which reason both bases will determine the same eigenval-
ues and eigenvectors.

The projected mass and stiffness matrices become

M̃ = ΨT2 MΨ2 = K̃−1 T −1 −1 −1


11 Ψ1 MΨ1 K̃11 = K̃11 M11 K̃11 (4–63)

K̃ = ΨT2 KΨ2 = K̃−1 T −1 −1


11 Ψ1 KΨ1 K̃11 = K̃11 (4–64)
Then, the modal matrices Q1 and Q2 , obtained as solutions to (4-49) for the respective Ritz
bases, are seen to be related as

Q1 = K̃−1
11 Q2 (4–65)
(4-65) follows from Φ̄ = Ψ1 Q1 = Ψ2 Q2 .

Example 4.2: Rayleigh-Ritz analysis


Given the following mass- and stiffness matrices
⎡ ⎤ ⎡ ⎤
1
0 0 2 −1 0
⎢ 2
⎥ ⎢ ⎥
M = ⎣ 0 1 0 ⎦ , K = ⎣−1 4 −1⎦ (4–66)
0 0 12 0 −1 2
which have the exact eigensolutions, cf. Example 1.5

⎡ ⎤⎡ ⎤ ⎡ √2 √ ⎤
λ1 0 0 2 0 0   2 −1 2
⎢ ⎥ ⎢ ⎥ ⎢√ √2 ⎥
Λ=⎣0 λ2 0 ⎦ = ⎣0 4 0⎦ , Φ = Φ(1) Φ(2) Φ(3) = ⎣ 22 0 − 22 ⎦ (4–67)
√ √
2 2
0 0 λ3 0 0 6 2 1 2

A two dimensional Rayleigh-Ritz analysis is performed, where the static load vectors are estimated as

⎡ ⎤
1 0
⎢ ⎥
f = ⎣0 0⎦ (4–68)
0 1
82 Chapter 4 – APPROXIMATE SOLUTION METHODS

The Ritz basis becomes

⎡ ⎤−1 ⎡ ⎤ ⎡ ⎤
2 −1 0 1 0 7 1
⎢ ⎥ ⎢ ⎥ 1 ⎢ ⎥
Ψ = ⎣−1 4 −1⎦ ⎣0 0⎦ = ⎣2 2⎦ (4–69)
12
0 −1 2 0 1 1 7

The projected mass and stiffness matrices become

   
1 29 11 1 7 1
M̃ = , K̃ = (4–70)
144 11 29 12 1 7

The eigensolutions with modal masses normalized to 1 become

     
  √3 2
ρ 0 2.4 0
R= 1 = , Q = q(1) q(2) = 5 (4–71)
0 ρ2 0 4 √3
5
−2

The solutions for the eigenvectors become

⎡ ⎤ ⎡ ⎤
  7 1  3  √2 1
1 ⎢ ⎥ √ 2 ⎢ 15 ⎥
Φ̄ = Φ̄(1) Φ̄(2) = ⎣2 2⎦ 35 = ⎢√ 0⎥ (4–72)
12 √ −2 ⎣ 5 ⎦
1 7 5 √2
5
−1

As seen from (4-71) and (4-72) ρ 2 = 4 and Φ̄(2) are calculated exact, cf. (1-86). This is so, because Φ (2) is placed
in the subspace spanned by the selected Ritz basis as seen from the expansion

⎡ ⎤ ⎡ ⎤ ⎡ ⎤
7 1 1
2 ⎢ ⎥ 2 ⎢ ⎥ ⎢ ⎥
Φ(2) = 2Ψ(1) − 2Ψ(2) = ⎣2⎦ − ⎣2⎦ = ⎣ 0⎦ (4–73)
12 12
1 7 −1

Next, a new analysis is performed, where the static load vectors are estimated as

⎡ ⎤
1 0
⎢ ⎥
f = ⎣1 1⎦ (4–74)
1 0

The Ritz basis becomes

⎡ ⎤−1 ⎡ ⎤ ⎡ ⎤
2 −1 0 1 0 5 1
⎢ ⎥ ⎢ ⎥ 1⎢ ⎥
Ψ = ⎣−1 4 −1⎦ ⎣1 1⎦ = ⎣4 2⎦ (4–75)
6
0 −1 2 1 0 5 1

The projected mass and stiffness matrices become


4.3 Error Analysis 83

   
1 41 13 1 7 2
M̃ = , K̃ = (4–76)
36 13 5 3 2 1

The eigensolutions with modal masses normalized to 1 become

    √ √ 
ρ 0 2 0   2
−322
R= 1 = , Q = q(1) q(2) = √2
2

9 2
(4–77)
0 ρ2 0 6 2 2

The solutions for the eigenvectors become

⎡ ⎤ ⎡ √2 √ ⎤
  1 5 1  √2 √  − 2
⎢ ⎥ −322 ⎢√
2 √2 ⎥
Φ̄ = Φ̄(1) Φ̄(2) = ⎣4 2⎦ √22 √ = ⎣ 22 2
⎦ (4–78)
6 9 2 √ √2
5 1 2 2
2
2
− 22

       
In this case ρ1 , Φ̄(1) = λ1 , Φ(1) and ρ2 , Φ̄(2) = λ3 , Φ(3) . This is so, because Φ(1) and Φ(3) are placed
in the sub-space spanned by the selected Ritz basis.

4.3 Error Analysis


Given a certain approximation to the jth eigen-pair ( λ̄j , Φ̄(j) ), the error vector is defined as

εj = K − λ̄j M Φ̄(j) (4–79)

Presuming that the eigenvectors have been normalized to unit modal mass, it follows from (1-
19) and (1-21) that

 T  T
M = Φ−1 I Φ−1 , K = Φ−1 Λ Φ−1 (4–80)

Insertion of (4-80) into (4-79) provides

 T

εj = Φ−1 Λ − λ̄j I Φ−1 Φ̄(j) ⇒



−1
Φ̄ (j)
= Φ Λ − λ̄j I ΦT ε j (4–81)

We shall use the Euclidean vector norm ·E and the Hilbert matrix norm ·H in the following.
For a definition of these quantities, see Box 4.3. The mentioned norms are compatible, so

, (j) , ,
−1 , , , ,
−1 , , ,
,Φ̄ , ≤ ,,Φ Λ− λ̄ j I Φ T,
, ε j  E ≤ ,Φ, ,, Λ− λ̄ j I
, , T,
, Φ H εj E (4–82)
E H H H
84 Chapter 4 – APPROXIMATE SOLUTION METHODS

The last statement of (4-82) follows from the defining properties of matrix norms, see Box 4.3.

(Λ− λ̄j I) is a diagonal matrix. Then, (Λ− λ̄j I)−1 is also a diagonal matrix with the components
(λk − λ̄j )−1 , k = 1, . . . , n in the main diagonal. The eigenvalues of a diagonal matrix is equal
to the components in the main diagonal. Since, the Hilbert norm of a symmetric matrix is equal
to the numerical largest eigenvalue, it follows that

,
−1 ,  
, , 1 1
, Λ − λ̄j I , = max = (4–83)
H k=1,...,n |λk − λ̄j | min |λk − λ̄j |
k=1,...,n

The Hilbert norms of Φ and ΦT are identical as stated in Box 4.3. Further, it can be shown that,
see Box 4.4

1
Φ2H = (4–84)
µ1
where µ1 is the lowest eigenvalue of M.

Then, (4-82), (4-83) and (4-84) provides the following bounding of the calculated eigenvalue
λ̄j

1 εj E 1 |εj |
min |λk − λ̄j | ≤ = (4–85)
k=1,...,n µ1 Φ̄(j) E µ1 |Φ̄(j) |

(4-85) is only of value, if µ1 can be calculated relatively easily. This is the case for the special
eigenvalue problem, where M = I, which means that µ 1 = · · · = µn = 1, so (4-85) reduces to

εj E |εj |
min |λk − λ̄j | ≤ = (4–86)
k=1,...,n Φ̄(j) E |Φ̄(j) |
4.3 Error Analysis 85

Box 4.3: Vector and matrix norms

A vector norm is a real number v associated to any n-dimensional vector v, which
fulfills the following conditions

1. v > 0 for v = 0 and 0 = 0.

2. cv = |c| · v for any complex or real number c.

3. u + v ≤ u + v (triangle inequality).

The most common vector norms are


-n
1/p
1. p-norm (p ∈]0, ∞[): vp = |vi |p .
i=1

-
n
2. One norm (p = 1): v1 = |vi |.
i=1
-
n
1/2
3. Two norm (p = 2, Euclidean norm): v2 = |v| = |vi | 2
.
i=1

4. Infinity norm (p = ∞): v∞ = max |vi |.


i=1,...,n

where vi denotes the components of v. Given

⎧ √ √ √ 2
⎡ ⎤ ⎪
⎪ v1/2 = 1 + 3 + 2 = 17.19

⎪  
1 ⎨ v =
⎢ ⎥ 1 1+3+2 =6
v = ⎣−3⎦ ⇒  2 1/2 (4–87)

⎪ v2 = 1 + 32 + 22
2 ⎪

= 3.74

v∞ = max(1, 3, 2) = 3

A matrix norm is a real number A associated to any n × n matrix A, which fulfills the
following conditions

1. A > 0 for A = 0 and 0 = 0.

2. cA = |c| · A for any complex or real number c.

3. A + B ≤ A + B (triangle inequality).

4. AB ≤ AB.
86 Chapter 4 – APPROXIMATE SOLUTION METHODS

The most common matrix norms are


-
n
1. One norm: A1 = max |aij |.
j=1,...,n i=1

-
n
2. Infinity norm: A∞ = max |aij |.
i=1,...,n j=1

-
n -
n
1/2
3. Euclidean norm: AE = a2ij .
i=1 j=1


1/2
4. Hilbert norm (spectral norm): AH = max λi , where λi is the ith eigen-
i=1,...,n
value of AAT identical to the eigenvalues of A A, so AH = AT H .
T

aij denotes the components of A. Notice, if A = A T the eigenvalues of AAT = A2


becomes equal to the square of the eigenvalues of A. Given



⎪ A1 = max(2 + 3, 5 + 1) = 6


    ⎪
⎨ A∞ =
⎪ max(2 + 5, 3 + 1) = 7
2 −5 T 29 11  1/2
A= ⇒ AA = ⇒ A = 4 + 25 + 9 + 1 = 6.24
3 −1 11 10 ⎪

E
.

⎪ √ 

⎪ 13 
⎩ AH = 3 + 5 = 5.83
2
(4–88)

A matrix norm  · m is said to be compatible to a given vector norm  ·  v , if

Avv ≤ Am · vv (4–89)

It can be shown that the Hilbert matrix norm is compatible to the Euclidean vector norm,
that the one matrix norm is compatible to the one vector norm, and that the infinity matrix
norm is compatible to the infinity vector norm. However, the Euclidean matrix norm is
not compatible to the Euclidean vector norm.
4.3 Error Analysis 87

Box 4.4: Hilbert norm of modal matrix

Presuming that the columns of the modal matrix have been normalized to unit modal
mass, so m = I, it follows from (1-19) that

 −1 −1
M = ΦT Φ ⇒ M−1 = ΦΦT (4–90)

From the definition of the Hilbert norm in Box 4.3 and (4-90) follows that Φ 2H becomes
equal to the maximum eigenvalue of M −1 . If µ1 , µ2, . . . , µn denote the eigenvalues
of M in ascending order, then the eigenvalues in ascending order of M−1 become
1
µn
, . . . , µ12 , µ11 , so the maximum eigenvalue of M−1 is equal to µ11 . This proves (4-84).

Example 4.3: Bound on calculated eigenvalue

Given the mass- and stiffness matrices for the following special eigenvalue problem

⎡ ⎤ ⎡ ⎤
1 0 0 3 −1 0
⎢ ⎥ ⎢ ⎥
M = ⎣0 1 0⎦ , K = ⎣−1 2 −1⎦ (4–91)
0 0 1 0 −1 3
The eigensolutions with modal masses normalized to 1 are given as

⎡ ⎤⎡ ⎤ ⎡ √6 √
2
√ ⎤
3
λ1 0 0 1 0 0   6 2
⎢ ⎥ ⎢ ⎥ ⎢ √ √3 ⎥
Λ=⎣0 λ2 0 ⎦ = ⎣0 3 0⎦ , Φ = Φ(1) Φ(2) Φ(3) = ⎣ 2 6 6 0 − 33 ⎦ (4–92)
√ √ √
0 0 λ3 0 0 4 6
6
− 22 3
3

Assume that the following approximate solution , ( λ̄2 , Φ̄(2) ), has been calculated to the 2nd eigen-pair (λ 2 , Φ(2) )

⎡ ⎤
1.0 + (2) +
⎢ ⎥ +Φ̄ + = 1.4283
λ̄2 = 3.1 , Φ̄(2) = ⎣ 0.2⎦ ⇒ (4–93)
−1.0
Then, the error vector becomes, cf. (4-79)

⎛⎡ ⎤ ⎡ ⎤⎞ ⎡ ⎤ ⎡ ⎤
3 −1 0 1 0 0 1.0 −0.30 + +
⎜⎢ ⎥ ⎢ ⎥⎟ ⎢ ⎥ ⎢ ⎥ +ε2 + = 0.3852
ε2 = ⎝⎣−1 2 −1⎦ − 3.1 ⎣0 1 0⎦⎠ ⎣ 0.2⎦ = ⎣−0.22⎦ ⇒ (4–94)
0 −1 3 0 0 1 −1.0 −0.10
Since, M = I we may use the simplified result (4-86), which provides

0.3852
|λ2 − λ̄2 | ≤ = 0.26971 (4–95)
1.4283
Actually, |λ2 − λ̄2 | = 0.1.
88 Chapter 4 – APPROXIMATE SOLUTION METHODS

4.4 Exercises
4.1 Given the following mass- and stiffness matrices
⎡ ⎤ ⎡ ⎤
0 0 0 6 −1 0
⎢ ⎥ ⎢ ⎥
M = ⎣0 2 1⎦ , K = ⎣−1 4 −1⎦
0 1 1 0 −1 2

(a.) Perform a static condensation by the conventional procedure based on (4-5), (4-6),
and next by a Rayleigh-Ritz analysis with the Ritz basis given by (4-62).

4.2 Given the following mass- and stiffness matrices


⎡ ⎤ ⎡ ⎤
2 0 0 6 −1 0
⎢ ⎥ ⎢ ⎥
M = ⎣0 2 1⎦ , K = ⎣−1 4 −1⎦
0 1 1 0 −1 2

(a.) Calculate approximate eigenvalues and eigenmodes by a Rayleigh-Ritz analysis us-


ing the following Ritz basis
⎡ ⎤
1 1
⎢ ⎥
Ψ = [Ψ(1) Ψ(2) ] = ⎣1 −1⎦
1 1

4.3 Consider the mass- and stiffness matrices in Exercise 4.2, and let
⎡ ⎤
1
⎢ ⎥
v = ⎣1⎦
1

 
(a.) Calculate the vector Φ̄(1) = K−1 Mv, and next λ̄1 = ρ Φ̄(1) , as approximate solu-
tions to the lowest eigenmode and eigenvalue.
(b.) Establish the error bound for the obtained approximation to the lowest eigenvalue.
C HAPTER 5
VECTOR ITERATION METHODS

5.1 Introduction
   
In structural
 dynamics
 only a small number n 1 of the lowest eigen-pairs, λ 1 , Φ (1)
, λ 2 , Φ (2)
,
. . . , λn 1 , Φ (n1 )
, where n1
n, are of structural significance. Hence, there is a need for meth-
ods, which concentrate on the determination of the low-order modes. This is the underlying
motivation for most of the methods described in the following chapters.

It should be noticed that if λj is known, then Φ(j) can be determined from the linear, homoge-
neous equations, cf. (1-9)

 
K − λj M Φ(j) = 0 (5–1)

If λj is an eigenvalue, the coefficient matrix K −λj M is singular. Then, Φ(j) can be determined
within a common factor by solving a linear system of n − 1 equations as illustrated in Example
1.5.

On the other hand, if Φ(j) is known, the eigenvalue λj can be determined from the Rayleigh
quotient, cf. (4-25)

Φ(j) T KΦ(j)
λj = (5–2)
Φ(j) T MΦ(j)
Since, the eigenvalues are determined as solutions to the characteristic equation (1-10), which
can only be solved analytically for n ≤ 4, all solution methods for practical problems relies
implicitly or explicitly on iterative numerical schemes.

Iterative numerical solution methods may be classified in the following categories

Vector iteration methods operate directly on the generalized eigenvalue problem (5-1), so that
a certain eigenvalue and associated eigenmode are determined iteratively with increasing accu-
racy. Vector iteration methods are considered both in Chapters 5 and 7.

— 89 —
90 Chapter 5 – VECTOR ITERATION METHODS

Similarity transformation methods transform the generalized eigenvalue problem via a sequence
of similarity transformations, so the transformed mass and stiffness matrices eventually attain a
diagonal form. These methods are considered in Chapter 6.

Characteristic polynomial iteration methods operates directly or indirectly on the characteristic


equation (1-10). These methods are dealt with in Section 7.4.

5.2 Inverse and Forward Vector Iteration


The principle in inverse vector iteration may be explained in the following way. Given a start
vector, Φ0 . Based on the generalized eigenvalue problem (5-1), one may then calculate a new
vector Φ1 as follows

KΦ1 = MΦ0 ⇒ Φ1 = K−1 MΦ0 = AΦ0 (5–3)


where
A = K−1 M (5–4)
Clearly, if Φ0 = Φ(j) is an eigenmode, then Φ1 = λ1j Φ0 . If not so, we may consider Φ1
as another, and hopefully better approximation to the eigenmode. Next, based on Φ 1 we may
proceed to calculate a still better approximation Φ 2 from

KΦ2 = MΦ1 ⇒ Φ2 = AΦ1 (5–5)


This proceed may be continued until the convergence criteria Φ k+1 = 1
Φ
λj k
is fulfilled with
sufficient accuracy.

The inverse vector iteration algorithm may then be summarized as follows

Box 5.1: Inverse vector iteration algorithm

Given start vector Φ0 , which needs not be normalized to unit modal mass. Repeat the
following items for k = 0, 1, . . .

1. Calculate Φ̄k+1 = AΦk .

2. Normalize solution vector to unit modal mass, so Φ Tk+1 MΦk+1 = 1:

Φ̄k+1
Φk+1 = (
Φ̄Tk+1 MΦ̄k+1
5.2 Inverse and Forward Vector Iteration 91

Obviously, the algorithm requires that the stiffness matrix is non-singular, so the inverse K −1
exists. By contrast the mass matrix needs not be non-singular as is the case in Example 5.1 be-
low. After convergence the lowest eigenvalue is most accurately calculated from the Rayleigh
quotient (4-25).

In case the lowest eigenvalue is simple, i.e. that λ 1 < λ2 , the inverse iteration algorithm con-
verges towards the lowest eigenpair (λ1 , Φ(1) ). The solution vector obtained after the kth itera-
tion step, Φk , is an n-dimensional vector, which may be expanded in the basis formed by the n
undamped eigenmodes as follows


Φk = q1,k Φ(1) + q2,k Φ(2) + · · · + qn,k Φ(n) = Φqk ⎪



⎡ ⎤ ⎪

q1,k ⎪

⎢q ⎥ (5–6)
⎢ 2,k ⎥ ⎪

Φ = [Φ(1) Φ(2) · · · Φ(n) ] , qk = ⎢ . ⎥ ⎪

⎣ .. ⎦ ⎪



qn,k

The components of the vector qk denote the modal coordinates of the vector Φk . The expansion
(5-6) should be considered as formal, since the base vectors Φ(1) , Φ(2) , . . . , Φ(n) are unknown.
Actually, the whole analysis deals with the determination of these quantities. Similarly, the
expansion for Φ̄k+1 reads

Φ̄k+1 = Φq̄k+1 (5–7)

where q̄k+1 denotes a vector of modal coordinates of Φ̄k+1 . Insertion of (5-6) and (5-7) into the
iteration algorithm provides

KΦq̄k+1 = MΦqk ⇒

ΦT KΦ q̄k+1 = ΦT MΦ qk ⇒

Λq̄k+1 = qk (5–8)

where the orthogonality properties (1-19) and (1-21) have been used, and the eigenmodes are
assumed to be normalized to unit modal mass. The diagonal matrix Λ is given by (1-15). As
k → ∞ convergence implies that λj q̄k+1 = qk = Ψ(j) , where Ψ(j) signifies the eigenmode in
the modal space. This means that
92 Chapter 5 – VECTOR ITERATION METHODS

ΛΨ(j) = λj Ψ(j) ⇒
⎡ ⎤⎡ ⎤ ⎡ ⎤
λ1 0 ··· 0 Ψ1 Ψ1
⎢0 λ ··· ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 2 0 ⎥ ⎢ Ψ2 ⎥ ⎢ Ψ2 ⎥
⎢. . .. .. ⎥ ⎢ . ⎥ = λj ⎢ . ⎥ ⇒
⎣.. .
. . . ⎦ ⎣ .. ⎦ ⎣ .. ⎦
0 0 · · · λn Ψn Ψn
⎡ ⎤ ⎡ ⎤
Ψ1 0
⎢ . ⎥ ⎢.⎥
⎢ .. ⎥ ⎢ .. ⎥
⎢ ⎥ ⎢ ⎥
⎢Ψ ⎥ ⎢0⎥
⎢ j−1 ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥
Ψ(j) = ⎢ Ψj ⎥ = ⎢1⎥ (5–9)
⎢ ⎥ ⎢ ⎥
⎢Ψj+1 ⎥ ⎢0⎥
⎢ ⎥ ⎢ ⎥
⎢ .. ⎥ ⎢ .. ⎥
⎣ . ⎦ ⎣.⎦
Ψn 0

The jth component of Ψ(j) is equal to 1, and the remaining components are zero.

Let the start vector be given as q0 = [1, . . . , 1]T . Then, the following sequence of results may
be calculated from (5-8)

⎡ ⎤⎡ ⎤ ⎡ 1 ⎤ ⎫
1
0 ··· 0 1 ⎪


λ1
⎥⎢ ⎥ ⎢ 1 ⎥
λ1 ⎪

⎢0 1
··· 0 ⎥ ⎢1⎥ ⎢ λ2 ⎥ ⎪



q1 = Λ q0 = ⎢ ⎥⎢ ⎥ ⎢ ⎥
−1 λ2
⎢ .. .. ⎥ ⎢ .. ⎥ = ⎢ .. ⎥ ⇒ ⎪

⎣.
.. ..
. ⎦ ⎣.⎦ ⎣ . ⎦ ⎪

. . ⎪



0 0 ··· 1
1 1 ⎪

λn λn ⎪

⎡ ⎤ ⎪

⎡ ⎤⎡ ⎤ ⎪

1
··· 1 1 ⎪

λ1
0 0 λ1 2
⎢ λ11 ⎥ ⎪

⎢ ⎥⎢ 1 ⎥ ⎢ 2⎥ ⎪

⎢0 1
··· 0 ⎥ ⎢ λ2 ⎥ ⎢ λ2 ⎥ ⎬
q2 = Λ−1 q1 = ⎢ ⎥⎢ ⎥
λ2
⎢ .. .. .. .. ⎥ ⎢ .. ⎥ = ⎢ .. ⎥ ⇒ ··· ⇒ (5–10)
⎣. . . . ⎦⎣ . ⎦ ⎢ ⎣.⎦
⎥ ⎪



··· 1 1 1 ⎪

0 0 λn λn λ2n ⎪



⎡ ⎤ ⎡ ⎤ ⎪

⎡1 ⎤ ⎡ ⎤ ⎪

0 ··· 0
1 1
1 ⎪

λ1 ⎢
k−1
λ1
⎥ ⎢ λk1
⎥ ⎪
⎢ ⎥⎢ 1 ⎥ ⎢ 1 ⎥ ⎢  λ1 k ⎥ ⎪


⎢0 1
··· 0 ⎥ ⎢ λk−1 ⎥ ⎢ λk ⎥ ⎢ ⎥ ⎪
−1
qk = Λ qk−1 =⎢
λ2 ⎥⎢ 2 ⎥ = ⎢ 2⎥ = 1 ⎢ λ2 ⎥⎪⎪
⎢ .. .. .. .. ⎥ ⎢ .. ⎥ ⎢ .. ⎥ λk ⎢ .. ⎥⎪⎪
⎣. . . . ⎦⎣ . ⎦ ⎣ . ⎦ 1 ⎣ . ⎦⎪⎪

 λ1 k ⎪ ⎪

0 0 ··· 1
λn
1
k−1
1
k λn λn
λn

If λ1 < λ2 ≤ · · · ≤ λn it follows from (5-10) that


5.2 Inverse and Forward Vector Iteration 93

⎡ ⎤
1
⎢0⎥
⎢ ⎥
lim λk1 qk = ⎢ . ⎥ = Ψ(1) (5–11)
k→∞ ⎣ .. ⎦
0

Hence, the algorithm converge to Ψ(1) in the modal space. The corresponding convergence to
Φ(1) then takes place in the physical space.

As seen from (5-11), |qk | → 0 if λ1 > 1, and |qk | → ∞ if λ1 < 1. This is the rationale behind
the normalization to unit modal mass of the iteration vector, performed at each iteration step in
the algorithm in Box 5.1.

The relative error of the iteration vector after the kth iteration step is defined from

+ k +    2k  2k
+λ1 qk − Ψ(1) + + + 2k
+ λ λ λ1
+ + = +λ1 qk − Ψ + =
k 1 1
ε1,k = +Ψ(1) +
(1)
+ +···+ =
λ2 λ3 λn
 k   2k  2k
λ1 λ2 λ2
1+ +···+ (5–12)
λ2 λ3 λn
 k
From (5-12) follows, that the relative error at large values of k has the magnitude ε 1,k λλ12 .
Based on the asymptotic behavior of the relative error, the convergence rate is defined from

+ k+1 +
ε1,k+1 +λ qk+1 − Ψ(1) +
r1 = lim = lim ++ k1
+ =
k→∞ ε1,k k→∞ λ1 qk − Ψ(1) +
(  λ 2k+2  2k+2
λ1 1 + λ3
2
+ · · · + λλn2 λ1
lim (     = (5–13)
k→∞ λ2 2k 2k λ2
1 + λ2 λ3
+ · · · + λ2λn

The last statement of (5-13) presumes that the eigenvalue λ 2 is simple, i.e. that λ2 < λ3 . It
follows from (5-12) that the smaller is the fraction λλ12 the faster will the convergence to the first
eigenmode be. Hence, the convergence rate as defined by (5-13) should be small (despite lin-
guistic logics suggests the opposite). An vector iteration scheme, where the convergence rate is
proportional to λλ12 is said to have linear convergence. Hence, inverse vector iteration has linear
convergence.

The Rayleigh quotient based on Φk = Φqk becomes

ΦTk KΦk qTk ΦT KΦqk qTk Λqk


ρ(qk ) = = = (5–14)
ΦTk MΦk qTk ΦT MΦqk qTk qk
94 Chapter 5 – VECTOR ITERATION METHODS

The relative error of the Rayleigh quotient after the kth iteration step is defined from

ρ(qk ) − λ1
ε2,k = (5–15)
λ1

From (5-10) follows that

⎡ ⎤T ⎡ ⎤⎡ 1 ⎤ ⎫
1
··· ⎪

λ1 0
λk1
⎢1⎥
0 λk1 ⎪


⎢ k⎥ ⎥⎢⎢ 1 ⎥
⎥ ⎪


⎢ 0 λ2
⎢ λ2 ⎥ ··· 0 ⎥ ⎢ λk ⎥ 1 1 1 1 ⎪

qTk Λqk = ⎢ . ⎥ ⎢ ⎥⎢ 2⎥ = + + + · · · + ⎪

⎢ .. ⎥ ⎢ .. .. .. .. ⎥ ⎢ .. ⎥ λ2k−1 λ2k−1 λ2k−1 λ 2k−1 ⎪

⎣ ⎦ ⎣. . . . ⎦⎣ . ⎦ 1 2 3 n ⎪


1
0 0 ··· λn 1 ⎪

λkn λk
n
⎡ ⎤T ⎡ ⎤
1 1 ⎪

λk1 λk1 ⎪

⎢1⎥ ⎢1⎥ ⎪

⎢ k⎥ ⎢ k⎥ ⎪

⎢ λ2 ⎥ 1 1⎢ λ2 ⎥
1 1 ⎪

qTk qk = ⎢ . ⎥ ⎢ . ⎥ = 2k + 2k + 2k + · · · + 2k ⎪

⎢ .. ⎥ ⎢ .. ⎥ λ1 λ2 λ3 λn ⎪

⎣ ⎦ ⎣ ⎦ ⎪

1 1 ⎪

λkn λkn
(5–16)

Then, (5-15) may be written as

1
1
λ12k−1
+ 1
λ22k−1
+ 1
λ32k−1
+ · · · + λ2k−11
n
ε2,k = −1=
λ1 1
λ2k
+ λ12k + 1
λ2k
+ · · · + λ12k
1 2 3 n

 λ 2k−1  λ 2k−1
 2k−1
1+ 1
λ2
+ + · · · + λλn1
1
λ3
 2k  λ 2k  2k −1=
1 + λλ12 + λ13 + · · · + λλn1
 2k−1  λ2 2k−1 
λ1
  2k−1  
λ1 1−
λ3 λ2
1 − λλ13 + · · · + λλn2
+ 1− λ1
λn
 λ1 2k  λ1 2k  λ1 2k =
λ2 1 + λ2 + λ3 + · · · + λn
 2k−1  
λ1 λ1
1− +··· (5–17)
λ2 λ2

where the dots denote terms, which converge to zero as k → ∞. (5-17) shows that the relative
 2k−1
error of the Rayleigh quotient at large values of k has the magnitude ε 2,k λλ12 . Hence,
the relative error on the components of the eigenmode at a certain iteration step, as measured
by ε1,k , is significantly larger than the relative error on the eigenvalue estimate, as determined
by the Rayleigh quotient.

The convergence rate of the Rayleigh quotient is defined from


5.2 Inverse and Forward Vector Iteration 95

 λ 2k+1  λ1
  2
ε2,k+1 1
1− +··· λ1
r2 = lim = lim  λ2 2k−1  λ2
= (5–18)
k→∞ ε2,k k→∞ λ1 1− λ1
+··· λ2
λ2 λ2

Hence, the Rayleigh quotient has quadratic convergence in inverse vector iteration.

Example 5.1: Inverse vector iteration


Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 4.1. Calculate
the lowest eigenvalue and eigenvector by inverse vector iteration using the inverse iteration algorithm described in
Box 5.1 with the start vector

⎡ ⎤
1
⎢ ⎥
⎢1⎥
Φ0 = ⎢ ⎥ (5–19)
⎣1⎦
1

The matrix A becomes, cf. (5-5), (4-16)

⎡ ⎤−1 ⎡ ⎤ ⎡ ⎤
2 −1 0 0 0 0 0 0 0 2 0 1
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢−1 2 −1 0⎥ ⎢0 2 0 0⎥ ⎢0 4 0 2⎥
A=⎢ ⎥ ⎢ ⎥=⎢ ⎥ (5–20)
⎣ 0 −1 2 −1⎦ ⎣0 0 0 0⎦ ⎣0 4 0 3⎦
0 0 −1 1 0 0 0 1 0 4 0 4

At the 1st and 2nd iteration step the following calculations are performed

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 0 2 0 1 1 3

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ ⎢0 4 0 2⎥ ⎢1⎥ ⎢6⎥

⎪ Φ̄1 = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⇒ Φ̄T1 MΦ̄1 = 136

⎪ ⎣0 4 0 3⎦ ⎣1⎦ ⎣7⎦




⎨ 0 4 0 4 1 8
⎡ ⎤ ⎡ ⎤ (5–21)



⎪ 3 0.25725




⎪ 1 ⎢ ⎥ ⎢
⎢6⎥ ⎢0.51450⎥


⎪ Φ = √ ⎢ ⎥ = ⎢ ⎥


1
136 ⎣7⎦ ⎣0.60025⎦


8 0.68599

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 0 2 0 1 0.25725 1.7150

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ ⎢0 4 0 2⎥ ⎢0.51450⎥ ⎢3.4300⎥

⎪ Φ̄2 = ⎢ ⎥⎢ ⎥=⎢ ⎥ ⇒ Φ̄T2 MΦ̄2 = 46.588

⎪ ⎣0 4 0 3⎦ ⎣0.60025⎦ ⎣4.1160⎦




⎨ 0 4 0 4 0.68599 4.8020
⎡ ⎤ ⎡ ⎤ (5–22)



⎪ 1.7150 0.25126

⎪ ⎢ ⎥ ⎢ ⎥

⎪ 1 ⎢3.4300⎥ ⎢0.50252⎥

⎪ Φ2 = √ ⎢ ⎥=⎢ ⎥



⎪ 46.588 ⎣4.1160⎦ ⎣0.60302⎦

4.8020 0.70353
96 Chapter 5 – VECTOR ITERATION METHODS

The Rayleigh quotient based on Φ 2 provides the following estimate for λ 1 , cf. (4-25)

⎡ ⎤T ⎡ ⎤⎡ ⎤
0.25126 2 −1 0 0 0.25126
⎢ ⎥ ⎢ ⎥⎢ ⎥
⎢0.50252⎥ ⎢−1 2 −1 0⎥ ⎢0.50252⎥
⎢ ⎥ ⎢ ⎥⎢ ⎥
⎣0.60302⎦ ⎣ 0 −1 2 −1⎦ ⎣0.60302⎦
0.70353 0 0 −1 1 0.70353
ρ(Φ2 ) = ⎡ ⎤T ⎡ ⎤⎡ ⎤ = 0.1464646 (5–23)
0.25126 0 0 0 0 0.25126
⎢ ⎥ ⎢ ⎥⎢ ⎥
⎢0.50252⎥ ⎢0 2 0 0⎥ ⎢0.50252⎥
⎢ ⎥ ⎢ ⎥⎢ ⎥
⎣0.60302⎦ ⎣0 0 0 0⎦ ⎣0.60302⎦
0.70353 0 0 0 1 0.70353

The exact solutions are given as, cf. (4-24)

⎡ 1
⎡ ⎤ ⎤
0.25000
√ ⎢
4
⎥ ⎢ ⎥
1 2 ⎢ ⎥ ⎢0.50000⎥
1
λ1 = − = 0.1464466 , Φ(1) ⎢
= ⎢1 ⎥2√⎢
=⎢ ⎥ (5–24)
2 4 2⎥ ⎥
⎣4 + 4 ⎦ ⎣0.60355⎦

2
2 0.70711

The relative errors, ε 1 and ε2 , on the calculation of the eigenvalue and the 1st component of Φ (1) becomes

|Φ2 − Φ(1) | 0.00458 −3 ⎪

ε1,2 = = = 4.22 · 10 ⎪

|Φ |
(1) 1.0848 ⎬
(5–25)


ρ(Φ2 ) − λ1 0.1464646 − 0.1464466 ⎪
ε2,2 = = = 1.23 · 10−4 ⎪

λ1 0.1464466
As seen the relative error on the components of the eigenmode is significantly larger than the error on the Rayleigh
quotient.

The generalized eigenvalue problem (1-9) may be reformulated on the form

MΦ(1) = λ1 MK−1 MΦ(1) ⇒

Ψ(1) = λ1 MK−1 Ψ(1) ⇒

K−1 Ψ(1) = λ1 K−1 MK−1 Ψ(1) , Ψ(1) = MΦ(1) (5–26)

From (5-26) the following Rayleigh quotient may be defined

vT K−1 v
ρ(v) = (5–27)
vT K−1 MK−1 v
If v = Ψ(1) = MΦ(1) then (5-4) provides the limit λ 1 . An inverse vector iteration procedure
based on the formulation (5-26), (5-27) has been indicated in Box 5.2. The lowest eigenmode
5.2 Inverse and Forward Vector Iteration 97

Φ(1) can only be retrieved after convergence, if M−1 exists.

Box 5.2: Alternative inverse vector iteration algorithm

Given start vector Ψ0 . Repeat the following items for k = 0, 1, . . .

1. Calculate vk+1 = K−1 Ψk .

2. Calculate Ψ̄k+1 = Mvk+1 .

3. Calculate the Rayleigh quotient (5-29) for the test vector Ψk by


 
  T
vk+1 Ψk ΨTk K−1 Ψk
ρ Ψk = T = T −1
vk+1 Ψ̄k+1 Ψk K MK−1 Ψk

4. Normalize the new solution vector, so ΨTk+1 K−1 MK−1 Ψk+1 = 1


& '
Ψ̄k+1 Ψ̄k+1
Ψk+1 = ( = T
T
vk+1 Ψ̄k+1 Ψk K−1 MK−1 Ψk

5. After convergence the lowest eigenmode at the same iteration step is calculated from
Φk+1 = M−1 Ψk+1 .

Example 5.2: Alternative inverse vector iteration

Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 1.5. Calculate
the lowest eigenvalue and eigenvector by inverse vector iteration using the alternative inverse vector iteration
algorithm in Box 5.2 with the start vector

⎡ ⎤
1
⎢ ⎥
Φ0 = ⎣1⎦ (5–28)
1

The inverse stiffness matrix becomes, cf. (1-77)

⎡ ⎤−1 ⎡ ⎤
2 −1 0 7 2 1
⎢ ⎥ 1 ⎢ ⎥
K−1 = ⎣−1 4 −1⎦ = ⎣2 4 2⎦ (5–29)
12
0 −1 2 1 2 7

At the 1st and 2nd iteration steps the following calculations are performed
98 Chapter 5 – VECTOR ITERATION METHODS

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 7 2 1 1 5

⎪ 1 ⎢ ⎥⎢ ⎥ 1 ⎢ ⎥

⎪ v = ⎣2 4 2⎦ ⎣1⎦ = ⎣4⎦


1
12 6

⎪ 1 2 7 1 5





⎪ ⎡ ⎤⎡ ⎤ ⎡ ⎤ ⎡ ⎤T ⎡ ⎤




1
0 0 5 5 5 5

⎪ 1 ⎢ 2
⎥⎢ ⎥ 1 ⎢ ⎥ 1 ⎢ ⎥ ⎢ ⎥ 41

⎪ Ψ̄1 = ⎣ 0 1 0 ⎦ ⎣4⎦ = ⎣8⎦ , T
v1 Ψ̄1 = ⎣4⎦ ⎣8⎦ =

⎪ 6 12 6 · 12 36


⎨ 0 0 1
2 5 5 5 5
(5–30)

⎪ ⎡ ⎤T ⎡ ⎤



⎪   v1 Ψ0
T 5 1
36 ⎢ ⎥ ⎢ ⎥ 84


⎪ ρ Ψ0 = vT Ψ̄1 = 6 · 41 ⎣4⎦ ⎣1⎦ = 41 = 2.0488



⎪ 1

⎪ 5 1



⎪ ⎡ ⎤ ⎡ ⎤



⎪ 5 0.3904

⎪ Ψ̄1 1 ⎢ ⎥ ⎢ ⎥

⎪ Ψ1 = = ( ⎣8⎦ = ⎣0.6247⎦

⎩ T
v1 Ψ̄1 12 · 36 5
41
0.3904

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 7 2 1 0.3904 0.3644

⎪ 1 ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ v2 = ⎣2 4 2⎦ ⎣0.6247⎦ = ⎣0.3384⎦

⎪ 12

⎪ 1 2 7 0.3904 0.3644





⎪ ⎡ ⎤⎡ ⎤ ⎡ ⎤ ⎡ ⎤T ⎡ ⎤




1
0 0 0.3644 0.1822 0.3644 0.1822

⎪ ⎢ 2
⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥

⎪ Ψ̄2 = ⎣ 0 1 0 ⎦ ⎣0.3384⎦ = ⎣0.3384⎦ , v2T Ψ̄2 = ⎣0.3384⎦ ⎣0.3384⎦ = 0.2473




⎨ 0 0 12 0.3644 0.1822 0.3644 0.1822
(5–31)

⎪ ⎡ ⎤T ⎡ ⎤



⎪   v2 Ψ1
T
1 ⎢
0.3644 0.3904

⎪ ⎥ ⎢ ⎥
⎪ ρ Ψ1 = vT Ψ̄2 = 0.2473 ⎣0.3384⎦ ⎣0.6247⎦ = 2.0055



⎪ 2

⎪ 0.3644 0.3904



⎪ ⎡ ⎤ ⎡ ⎤



⎪ 0.1822 0.3664

⎪ Ψ̄2 1 ⎢ ⎥ ⎢ ⎥

⎪ Ψ2 = = √ ⎣0.3384⎦ = ⎣0.6805⎦

⎩ T
v2 Ψ̄2 0.2473
0.1822 0.3664
The lowest eigenvector at the end of the 2nd iteration step becomes

⎡ ⎤−1 ⎡ ⎤ ⎡ ⎤
1
0 0 0.3664 0.7328
⎢ 2
⎥ ⎢ ⎥ ⎢ ⎥
Φ2 = M−1 Ψ2 = ⎣ 0 1 0 ⎦ ⎣0.6805⎦ = ⎣0.6805⎦ (5–32)
1
0 0 2 0.3664 0.7328
The exact solutions are given as, cf. (1-87)

⎡ √2 ⎤ ⎡ ⎤
2 0.7071
⎢ √2 ⎥ ⎢ ⎥
Φ(1) = ⎣ 2 ⎦ = ⎣0.7071⎦ , λ1 = 2 (5–33)

2
2
0.7071
As for the simple formulation of the inverse vector iteration algorithm the convergence towards the exact eigen-
value takes place as a monotonously decreasing sequence of upper values, ρ(Ψ 0 ), ρ(Ψ1 ), . . ..
5.2 Inverse and Forward Vector Iteration 99

The principle in forward vector iteration may also be explained based on the eigenvalue problem
(5-1). Given a start vector, Φ0 , a new vector Φ1 may be calculated as follows

KΦ0 = MΦ1 ⇒ Φ1 = M−1 KΦ0 = BΦ0 (5–34)

where

B = M−1 K (5–35)

Clearly, if Φ0 = Φ(j) is an eigenmode, then Φ1 = λj Φ0 . If not so, a new and better approxi-
mation Φ2 may be calculated based on Φ1 as follows

Φ2 = BΦ1 (5–36)

The process may be continued until converge is obtained. The forward vector iteration algo-
rithm may be summarized as follows

Box 5.3: Forward vector iteration algorithm

Given start vector Φ0 , which needs not be normalized to unit modal mass. Repeat the
following items for k = 0, 1, . . .

1. Calculate Φ̄k+1 = BΦk .

2. Normalize solution vector to unit modal mass, so Φ Tk+1 MΦk+1 = 1

Φ̄k+1
Φk+1 = (
Φ̄Tk+1 MΦ̄k+1

Obviously, the algorithm requires that the mass matrix is non-singular, so the inverse M −1
exists. By contrast the stiffness matrix needs not be non-singular. After convergence the eigen-
value is calculated from the Rayleigh quotient.

In case the largest eigenvalue is simple, i.e. that λn−1 < λn , the forward iteration algorithm
converges towards the largest eigenpair λn , Φ(n) . The convergence rate of the eigenmode
estimate is linear, and the convergence rate of the Rayleigh quotient is quadratic in the fraction
λn−1
λn
. A proof of this has been given in Section 5.3.
100 Chapter 5 – VECTOR ITERATION METHODS

Example 5.3: Forward vector iteration

Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 1.4. Calculate
the largest eigenvalue and eigenvector by forward vector iteration using the forward vector iteration algorithm in
Box 5.3 with the start vector

⎡ ⎤
1
⎢ ⎥
Φ0 = ⎣0⎦ (5–37)
0

The matrix B becomes, cf. (5-35), (1-77)

⎡ ⎤−1 ⎡ ⎤ ⎡ ⎤
1
0 0 2 −1 0 4 −2 0
⎢ 2
⎥ ⎢ ⎥ ⎢ ⎥
B = ⎣0 1 0 ⎦ ⎣−1 4 −1⎦ = ⎣−1 4 −1⎦ (5–38)
0 0 1
2 0 −1 2 0 −2 4

At the 1st and 2nd iteration step the following calculations are performed

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 4 −2 0 1 4

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ Φ̄ = ⎣−1 4 −1⎦ ⎣ 0⎦ = ⎣ −1 ⎦ ⇒ Φ̄T1 MΦ̄1 = 9


1


⎨ 0 −2 4 0 0
⎡ ⎤ ⎡ ⎤ (5–39)



⎪ 4 1.3333

⎪ 1 ⎢ ⎥ ⎢ ⎥

⎪ Φ1 = √ ⎣−1⎦ = ⎣−0.3333⎦


⎩ 9
0 0

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 4 −2 0 1.3333 6.0000

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ Φ̄2 = ⎣−1 4 −1⎦ ⎣−0.3333⎦ = ⎣−2.6667⎦ ⇒ Φ̄T2 MΦ̄2 = 25.333




⎨ 0 −2 4 0 0.6667
⎡ ⎤ ⎡ ⎤ (5–40)



⎪ 6.0000 1.1921

⎪ 1 ⎢ ⎥ ⎢ ⎥

⎪ Φ2 = √ ⎣−2.6667⎦ = ⎣−0.5298⎦


⎩ 25.333
0.6667 0.1325

The Rayleigh quotient based on Φ 2 becomes

⎡ ⎤T ⎡ ⎤⎡ ⎤
1.1921 2 −1 0 1.1921
⎢ ⎥ ⎢ ⎥⎢ ⎥
⎣−0.5298⎦ ⎣−1 4 −1⎦ ⎣−0.5298⎦
0.1325 0 −1 2 0.1325
ρ(Φ2 ) = ⎡ ⎤T ⎡ ⎤⎡ ⎤ = 5.404 (5–41)
1
1.1921 0 0 1.1921
⎢ ⎥ ⎢2 ⎥⎢ ⎥
⎣−0.5298⎦ ⎣ 0 1 0 ⎦ ⎣−0.5298⎦
0.1325 0 0 12 0.1325

The results for the iteration vector and the Rayleigh quotient in the succeeding iteration steps become
5.3 Shift in Vector Iteration 101

⎡ ⎤ ⎫
1.0622 ⎪

⎢ ⎥ ⎪
Φ3 = ⎣−0.6276⎦ , ρ(Φ3 ) = 5.697 ⎪





0.2897 ⎪





⎡ ⎤ ⎪

0.9584 ⎪



⎢ ⎥
Φ4 = ⎣−0.6726⎦ , ρ(Φ4 ) = 5.855 (5–42)


0.4204 ⎪



⎡ ⎤ ⎪



0.8811 ⎪

⎢ ⎥ ⎪
Φ5 = ⎣−0.6923⎦ , ρ(Φ5 ) = 5.933 ⎪





0.5149 ⎪

The exact solutions becomes, cf. (6-54)

⎡ √ ⎤
2
⎡ ⎤
2 0.7071
⎢ √2 ⎥ ⎢ ⎥
Φ(3) = ⎣− 2 ⎦ = ⎣−0.7071⎦ , λ3 = 6 (5–43)

2
2 0.7071

The relative slow convergence of the algorithm to the exact solution is because the fraction λλ23 = 46 is relatively
high. Theoretically the relatively errors of the Rayleigh quotient after 5 iterations should be of the magnitude, cf.
(5-17)

4
2·5−1 4

ε2,5 1− = 0.0087 (5–44)


6 6
Actually, the error is slightly larger, namely

6 − 5.933
ε2,5 = = 0.0112 (5–45)
6

5.3 Shift in Vector Iteration


Shift on the stiffness matrix in the eigenvalue problem (5-1) as indicated by (3-36)-(3-38) may
be appropriate both in relation to inverse and forward vector iteration, either in order to obtain
convergence to other eigen-pairs than (λ1 , Φ(1) ) or (λn , Φ(n) ), or to improve the convergence
rate of the iteration process.

Let K̂ = K − ρM. denote a shift on the stiffness matrix as indicated by (3-38). The vector
iteration is next performed on the shifted eigenvalue problem (3-37). The algorithms in Box 5.1
and 5.3 remain unchanged, if the matrices A and B in (5-6) and (5-35) are redefined as follows

A = K̂−1 M (5–46)
102 Chapter 5 – VECTOR ITERATION METHODS

B = M−1 K̂ (5–47)
The Rayleigh quotient estimate of the eigenvalue λ j after the kth iteration step becomes

ΦTk K̂Φk
λ̄j = ρ(Φk ) + ρ = +ρ (5–48)
ΦTk MΦk
In the modal space the inverse vector iteration with shift on the stiffness matrix can be written
as, cf. (5-8)

K̂Φq̄k+1 = MΦqk ⇒

ΦT K − ρM Φ q̄k+1 = ΦT MΦ qk ⇒

Λ − ρI q̄k+1 = qk (5–49)
(5-49) is identical to (5-8), if λj is replaced with λj − ρ. With the same start vector q0 =
[1, . . . , 1]T as in (5-10), the solution vector after the kth iteration step becomes, cf. (5-10)
⎡ ⎤ ⎡  λ −ρ k ⎤
1 j
(λ1 −ρ)k
⎢ ⎥ ⎢ λ1 −ρ
.. ⎥
⎢ .. ⎥ ⎢ ⎥
⎢ . ⎥ ⎢ .  ⎥
⎢ ⎥ ⎢ λj −ρ k ⎥
⎢ (λj−11−ρ)k ⎥ ⎢ λj−1 −ρ ⎥
⎢ ⎥ ⎢ ⎥
⎢ ⎥ 1 ⎢ ⎥
⎢ ⎥
qk = ⎢ (λj −ρ)k ⎥ =
1 ⎢ 1 ⎥ (5–50)
(λ − ρ) k ⎢ ⎥
⎢ ⎥ j ⎢  ⎥
⎢ 1 ⎥ ⎢ λj −ρ ⎥ k
⎢ (λj+1 −ρ)k ⎥ ⎢ λj+1 −ρ ⎥
⎢ ⎥ ⎢ ⎥
⎢ .. ⎥ ⎢ .. ⎥
⎣ . ⎦ ⎣ .
 ⎦
1 λj −ρ k
(λn −ρ)k λn −ρ

where the jth eigenvalue fulfills

|λj − ρ| = min |λi − ρ| (5–51)


i=1,...,n

It then follows from (5-50) that


⎡ ⎤
0
⎢.⎥
⎢ .. ⎥
⎢ ⎥
⎢0⎥
 k ⎢ ⎥
⎢ ⎥
lim λj − ρ qk = ⎢1⎥ = Ψ(j) (5–52)
k→∞ ⎢ ⎥
⎢0⎥
⎢ ⎥
⎢ .. ⎥
⎣.⎦
0
5.3 Shift in Vector Iteration 103

Hence, for a value of ρ fulfilling (5-51) the algorithm converge to Ψ(j) in the modal space. In
physical space the algorithm then converge to Φ (j) . The convergence rate of the eigenmode
becomes, cf. (5-13)

+ + + +
+ λj − ρ + + λj − ρ +
r1 = max ++ +,+ + (5–53)
λj−1 − ρ + + λj+1 − ρ +

Then, the corresponding convergence rate of the Rayleigh quotient is given as r 2 = r12 .

a) λ
0 λ1 λj−1 λj ρ λj+1 λn−1 λn

b) λ
0 ρ λ1 λj−1 λj λj+1 λn−1 λn

c) λ
0 λ1 λj−1 λj λj+1 λn−1 λn ρ

Fig. 5–1 Optimal position of shift parameter at inverse vector iteration. a) Convergence towards λ j . b) Conver-
gence towards λ1 . c) Convergence towards λ n .

In case inverse vector iteration towards the jth eigenmode is attempted, the shift parameter
should be place in the vicinity of λ j as shown on Fig. 5.1a in order to obtain a small con-
vergence rate. It should be emphasized that any inverse vector iteration with shift should be
accompanied with a Sturm sequence check to insure that the calculated eigenvalue is indeed the
λj .

At inverse vector iteration towards the lowest eigenmode the convergence rate r 1 = |λ1 −
ρ|/|λ2 − ρ| should be minimized. Hence, ρ should be placed close to but below λ 1 , as shown
on Fig. 5.1b.

At inverse vector iteration towards the highest eigenmode the convergence rate r 1 = |λn−1 −
ρ|/|λn − ρ| should be minimized. Hence, ρ should be placed close to but above λ n , as shown
on Fig. 5.1c.

In case of forward iteration with shift, (5-49) provides the solution after k iterations
104 Chapter 5 – VECTOR ITERATION METHODS

⎡ ⎤ ⎡  λ −ρ k ⎤
1
k
(λ1 − ρ)
⎢ ⎥ ⎢ λj −ρ.. ⎥
⎢ .
.. ⎥ ⎢ ⎥
⎢ ⎥ ⎢ .
 ⎥
⎢(λ − ρ)k ⎥ ⎢ λj−1 −ρ k ⎥
⎢ j−1 ⎥ ⎢ λj −ρ ⎥
⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥
qk = ⎢ (λj − ρ)k ⎥ = (λj − ρ)k ⎢
⎢ 1 ⎥
⎥ (5–54)
⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢  k ⎥
⎢(λj+1 − ρ)k ⎥ ⎢
λj+1 −ρ

⎢ ⎥ ⎢ λj −ρ

⎢ .. ⎥ ⎢ . ⎥
⎣ . ⎦ ⎣ .
. ⎦

λn −ρ k
(λn − ρ)k
λj −ρ

where the jth eigenvalue fulfills

|λj − ρ| = max |λi − ρ| (5–55)


i=1,...,n

Clearly, (5-55) has the solutions λj = λ1 or λj = λn . The former occurs, if ρ is closest to λn ,


and the latter if ρ is closest to λ1 . Then, it follows from (5-54) that

1
k qk = Ψ
(j)
lim  , j = 1, n (5–56)
k→∞ λj − ρ

For a value of ρ fulfilling (5-55) the algorithm converge to Ψ(j) in the modal space, or to Φ(j)
in the physical space. Forward iteration with shift always converge to either the lowest or the
highest eigenmode depending on the magnitude of the shift parameter. The convergence rate of
the iteration vector becomes

+ + + + + + + +
+ λ1 − ρ + + λj−1 − ρ + + λj+1 − ρ + + λn − ρ +
+
r1 = max + + +
,...,+ + , + + +
,...,+ + (5–57)
λj − ρ + λj − ρ + + λj − ρ + λj − ρ +

Shift in forward vector iteration is not as useful as in inverse vector iteration, because the optimal
choice of the shift parameter is more difficult to specify. At forward vector iteration towards the
highest eigenmode the optimal shift parameter is typically placed somewhere in the middle of
the eigenvalue spectrum. Especially for ρ = 0, (5-57) becomes

λn−1
r1 = (5–58)
λn
as stated in Section 5.2 on forward iteration without shift.

Example 5.4: Forward vector iteration with shift


5.3 Shift in Vector Iteration 105

The problem in Example 5.3 is considered again. However, now a shift with ρ = 3 is performed on the stiffness
matrix.

The matrix K̂ becomes, cf. (3-38), (1-77)

⎡ ⎤ ⎡ ⎤ ⎡ ⎤
2 −1 0 1
0 0 1
−1 0
⎢ ⎥ ⎢2 ⎥ ⎢ 2

K̂ = ⎣−1 4 −1⎦ − 3 ⎣ 0 1 0 ⎦ = ⎣−1 1 −1⎦ (5–59)
0 −1 2 0 0 1
2 0 −1 1
2

The matrix B becomes, cf. (5-47), (1-77)

⎡ ⎤−1 ⎡ ⎤ ⎡ ⎤
1
0 0 1
−1 0 1 −2 0
⎢ 2
⎥ ⎢ 2 ⎥ ⎢ ⎥
B = ⎣0 1 0 ⎦ ⎣−1 1 −1⎦ = ⎣−1 1 −1⎦ (5–60)
0 0 1
2 0 −1 1
2 0 −2 1

At the 1st and 2nd iteration step the following calculations are performed

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 1 −2 0 1 1

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥
⎪ Φ̄T1 MΦ̄1 = 1.5
⎪ Φ̄1 = ⎣−1


1 −1⎦ ⎣0⎦ = ⎣−1⎦ ⇒


⎨ 0 −2 1 0 0
⎡ ⎤ ⎡ ⎤ (5–61)



⎪ 1 0.8165

⎪ 1 ⎢ ⎥ ⎢ ⎥

⎪ Φ1 = √ ⎣−1⎦ = ⎣−0.8165⎦


⎩ 1.5
0 0

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 1 −2 0 0.8165 2.4495

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ Φ̄2 = ⎣−1 1 −1⎦ ⎣−0.8165⎦ = ⎣−1.6330⎦ ⇒ Φ̄T1 MΦ̄1 = 7




⎨ 0 −2 1 0 1.6330
⎡ ⎤ ⎡ ⎤ (5–62)



⎪ 2.4495 0.9258

⎪ 1 ⎢ ⎥ ⎢ ⎥

⎪ Φ2 = √ ⎣−1.6330⎦ = ⎣−0.6172⎦


⎩ 7
1.6330 0.6172

The Rayleigh quotient estimate of λ 3 based on Φ2 becomes, cf. (5-48)

⎡ ⎤T ⎡ ⎤⎡ ⎤
0.9258 1
−1 0 0.9258
⎢ ⎥ ⎢ 2
⎥⎢ ⎥
⎣−0.6172⎦ ⎣−1 1 −1⎦ ⎣−0.6172⎦
0.6172 0 −1 1
2 0.6172
λ̄3 = ρ(Φ2 ) + 3 = ⎡ ⎤T ⎡ ⎤⎡ ⎤ + 3 = 2.9048 + 3 = 5.9048 (5–63)
1
0.9258 0 0 0.9258
⎢ ⎥ ⎢2 ⎥⎢ ⎥
⎣−0.6172⎦ ⎣ 0 1 0 ⎦ ⎣−0.6172⎦
0.6172 0 0 12 0.6172

The results for the iteration vector and the eigenvalue estimate in the succeeding iteration steps become
106 Chapter 5 – VECTOR ITERATION METHODS

⎡ ⎤ ⎫
0.7318 ⎪

⎢ ⎥ ⎪
Φ3 = ⎣−0.7318⎦ , λ̄3 = 5.9891 ⎪





0.6273 ⎪





⎡ ⎤ ⎪

0.7331 ⎪



⎢ ⎥
Φ4 = ⎣−0.6982⎦ , λ̄3 = 5.9988 (5–64)


0.6982 ⎪



⎡ ⎤ ⎪



0.7100 ⎪

⎢ ⎥ ⎪
Φ5 = ⎣−0.7100⎦ , λ̄3 = 5.9999 ⎪





0.6983 ⎪

The results in (5-64) should be compared to those in (5-42). As seen the convergence of the shifted problem is
much faster.

5.4 Inverse Vector Iteration with Rayleigh Quotient Shift


As demonstrated in Section 5.3 the convergence properties of inverse vector towards the lowest
mode are improved if a shift on the stiffness matrix is performed with a shift parameter fulfilling
ρ λ1 . The idea in the present section is to update the shift parameter at each iteration step with
the most recent estimate of the lowest eigenvalue. Assume, that an estimate of the eigenvalue
λ̄1 is known after the kth iteration step. Then, a shift with the parameter ρ k = λ̄1 is performed,
so a new un-normalized eigenmode estimate is calculated at the (k + 1)th iteration step from


−1
Φ̄k+1 = K − ρk M MΦk (5–65)

where Φk is the normalized estimate of the eigenmode after the kth iteration step.

A new estimate of the eigenvalue, and hence the shift parameter, then follows from (5-48)

 
Φ̄Tk+1 K − ρk M Φ̄k+1
ρk+1 = + ρk (5–66)
Φ̄Tk+1 M Φ̄k+1
 
The convergence towards λ1 , Φ(1) is not safe, since the first shift determined by ρ 1 may cause
convergence towards other eigen-pairs, especially if the first and second eigenvalue are close.
For this reason the first couples of iteration steps are often performed without shift. When the
convergence towards the first eigen-pair takes place, the convergence rate of the Rayleigh quo-
tient estimate of the eigenvalue will be cubic, i.e. r 2 = ( λλ12 )3 . Additionally, the length of the
converge process is very much dependent on the start vector, as demonstrated in the succeeding
Example 5.5. Even though the convergence may be fast it should be realized that the process
requires inversion of the matrix K − ρk M at each iteration step, which may be expensive for
5.4 Inverse Vector Iteration with Rayleigh Quotient Shift 107

large systems.

Box 5.4: Algorithm for inverse vector iteration with Rayleigh quotient shift

Given start vector Φ0 , which needs not be normalized to unit modal mass, and set the
initial shift to ρ0 = 0. Repeat the following items for k = 0, 1, . . .

−1
1. Calculate Φ̄k+1 = K − ρk M MΦk .

2. Calculate new shift parameter (new estimate on the eigenvalue) from the Rayleigh
quotient estimate based on Φ̄k+1 by
 
Φ̄Tk+1 K − ρk M Φ̄k+1  
ρk+1 = + ρk estimate on λ 1
Φ̄Tk+1 M Φ̄k+1

3. Normalize the new solution vector to unit modal mass

Φ̄k+1
Φk+1 = (
Φ̄Tk+1 M Φ̄k+1

Example 5.5: Inverse vector iteration with Rayleigh quotient shift


Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 1.4. Calculate
the lowest eigenvalue and eigenvector by inverse vector iteration with Rayleigh quotient shift with the start vector

⎡ ⎤
1
⎢ ⎥
Φ0 = ⎣0⎦ (5–67)
0

At the 1st and 2nd iteration step the following calculations are performed
108 Chapter 5 – VECTOR ITERATION METHODS

⎧ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤

⎪ 2 −1 0 1
0 0 2 −1 0

⎪ ⎢ ⎥ ⎢2 ⎥ ⎢ ⎥

⎪ K̂ = ⎣−1 4 −1⎦ − 0 · ⎣ 0 1 0 ⎦ = ⎣−1 4 −1⎦



⎪ 0 −1 2 0 0 1
0 −1 2

⎪ 2



⎪ ⎡ ⎤−1 ⎡ ⎤⎡ ⎤ ⎡ ⎤



⎪ 2 −1 0 1
0 0 1 0.2917

⎪ ⎢ ⎥ ⎢2 ⎥⎢ ⎥ ⎢ ⎥

⎪ Φ̄ = ⎣−1 4 −1⎦ ⎣ 0 1 0 ⎦ ⎣0⎦ = ⎣0.0833⎦ ⇒ Φ̄T1 MΦ̄1 = 0.05035


1


⎨ 0 −1 2 0 0 1
2 0 0.0417
(5–68)

⎪ ⎡ ⎤T ⎡ ⎤⎡ ⎤

⎪ 2 −1


0.2917 0 0.2917


1 ⎢ ⎥ ⎢ ⎥⎢ ⎥
⎪ ρ1 = 0.05035 ⎣0.0833⎦ ⎣−1


4 −1⎦ ⎣0.0833⎦ + 0 = 2.8966

⎪ 0 −1

⎪ 0.0417 2 0.0417



⎪ ⎡ ⎤ ⎡ ⎤



⎪ 0.2917 1.2999

⎪ 1 ⎢ ⎥ ⎢ ⎥

⎪ Φ1 = √ ⎣0.0833⎦ = ⎣0.3714⎦

⎩ 0.05035
0.0417 0.1857

⎧ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤

⎪ 2 −1 0 1
0 0 0.5517 −1.0000 0.0000

⎪ ⎢ ⎥ ⎢ 2
⎥ ⎢ ⎥

⎪ K̂ = ⎣−1 4 −1⎦ − 2.8966 · ⎣ 0 1 0 ⎦ = ⎣−1.0000 1.1034 −1.0000⎦



⎪ 0 −1 2 0 0 12 0.0000 −1.0000 0.5517





⎪ ⎡ ⎤−1 ⎡ ⎤⎡ ⎤ ⎡ ⎤



⎪ 0.5517 −1.0000 0.0000 1
0 0 1.2999 −0.0567

⎪ ⎢ ⎥ ⎢2 ⎥⎢ ⎥ ⎢ ⎥

⎪ Φ̄2 = ⎣−1.0000 1.1034 −1.0000⎦ ⎣ 0 1 0 ⎦ ⎣0.3714⎦ = ⎣−0.6812⎦ ⇒ Φ̄T2 MΦ̄2 = 1.0342




⎨ 0.0000 −1.0000 0.5517 0 0 12 0.1857 −1.0664

⎪ ⎡ ⎤T ⎡ ⎤⎡ ⎤

⎪ −0.0567 0.5517 −1.0000 0.0000 −0.0567

⎪ 1 ⎢

⎪ ⎥ ⎢ ⎥⎢ ⎥
⎪ ρ2 =
⎪ ⎣−0.6812⎦ ⎣−1.0000 1.1034 −1.0000⎦ ⎣−0.6812⎦ + 2.8966 = 2.5206

⎪ 1.0342

⎪ −1.0664 0.0000 −1.0000 0.5517 −1.0664





⎪ ⎡ ⎤ ⎡ ⎤

⎪ −0.0567 −0.0557




1 ⎢ ⎥ ⎢ ⎥

⎪ Φ2 = √ ⎣−0.6812⎦ = ⎣−0.6698⎦
⎩ 1.0342
−1.0664 −1.0486
(5–69)
The results for the iteration vector and the eigenvalue estimate in the succeeding iteration steps become

⎡ ⎤ ⎫
0.9011 ⎪

⎢ ⎥ ⎪

Φ3 = ⎣0.6830⎦ , ρ3 = 2.0793 ⎪





0.5049 ⎬
⎡ ⎤ (5–70)


−0.6985 ⎪

⎢ ⎥ ⎪

Φ4 = ⎣−0.7073⎦ , ρ4 = 2.0001 ⎪




−0.7152
Despite the shifts the convergence is very slow during the 1st and 2nd iteration step. Not until the 3rd and 4th step
a fast speed-up of the convergence takes place. This is due to the poor guess of the start vector.
5.5 Vector Iteration with Gram-Schmidt Orthogonalization 109

5.5 Vector Iteration with Gram-Schmidt Orthogonaliza-


tion
Inverse vector iteration or forward
 vector iteration
 with Gram-Schmidt orthogonalization is
used, when other eigen-pairs than λ1 , Φ (1)
or λn , Φ(n) are wanted.

Assume, that the eigenmodes Φ (1) , Φ(2) , . . . , Φ(m) , m < n, have been determined. Next, the
eigenmode Φ(m+1) is wanted using inverse vector iteration by means of the algorithm in Box
5.1. In order to prevent the algorithm to converge toward Φ (1) a cleansing of the vector Φk+1
for information about the first m eigenmodes is performed by a so-called Gram-Schmidt or-
thogonalization . In this respect the following modified iteration vector iteration algorithm is
considered

%
m
Φ̂k+1 = Φ̄k+1 − cj Φ(j) (5–71)
j=1

Inspired by the variational problem (4-31), where the test vector v is chosen to be M-orthogonal
to the previous determined eigenmodes, the modified iteration vector Φ̂k+1 is chosen to be M-
orthogonal on Φ(1) , Φ(2) , . . . , Φ(m) , i.e.

Φ(i) T MΦ̂k+1 = 0 , i = 1, . . . , m (5–72)

(5-71) is premultiplied with Φ (i) T M. Assuming that the calculated eigenmodes have been
normalized to unit modal mass, it follows from (1-17), (5-71) and (5-72) that the expansion
coefficients c1 , c2 , . . . , cm are determined from

%
m
(i) T
0=Φ MΦ̄k+1 − cj Φ(i) T MΦ(j) = Φ(i) T MΦ̄k+1 − ci ⇒
j=1

ci = Φ(i) T MΦ̄k+1 (5–73)

After insertion of the calculated expansion coefficients into (5-71), Φ̂k+1 is considered as the
estimate to Φ(m+1) at the (k + 1)th iteration step. The convergence takes place with the linear
convergence rate r1 = λλm+1
m+2
.

In principle the orthogonalization process need only to be performed after the first iteration
step, since all succeeding iteration vectors then will be orthogonal to the subspace spanned by
Φ(1) , Φ(2) , . . . , Φ(m) . However, round-off errors inevitable introduce information about the first
eigenmode. Obviously, the use of this so-called vector deflation method becomes increasingly
cumbersome as m increases.

A similar orthogonalization process can be performed in relation to forward vector iteration to


ensure convergence to eigenmodes somewhat lower than the highest.
110 Chapter 5 – VECTOR ITERATION METHODS

Box 5.5: Algorithm for inverse vector iteration with Gram-Schmidt orthogonalization

Given start vector Φ0 , which needs not be normalized to unit modal mass. Repeat the
following items for k = 0, 1, . . .

1. Calculate Φ̄k+1 = K−1 MΦk .

2. Orthogonalize iteration vector to previous calculated eigenmodes Φ (j) , j = 1, . . . , m


%
m
Φ̂k+1 = Φ̄k+1 − cj Φ(j) , cj = Φ(j) T MΦ̄k+1
j=1

3. Normalize the orthogonalized iteration vector to unit modal mass

Φ̂k+1
Φk+1 = (
Φ̂Tk+1 M Φ̂k+1

Example 5.6: Inverse and forward vector iteration with Gram-Schmidt orthogonalization

Given the following mass- and stiffness matrices


⎡ ⎤ ⎡ ⎤
2 0 0 0 5 −4 1 0
⎢ ⎥ ⎢ ⎥
⎢0 2 0 0⎥ ⎢−4 6 −4 1⎥
M=⎢ ⎥ , K=⎢ ⎥ (5–74)
⎣0 0 1 0⎦ ⎣ 1 −4 6 −4⎦
0 0 0 1 0 1 −4 5

Further, assume that the lowest and highest eigenmodes have been determined by inverse and forward vector
iteration

⎡ ⎤ ⎡ ⎤
0.31263 −0.10756
⎢ ⎥ ⎢ ⎥
⎢0.49548⎥ ⎢ 0.25563⎥
Φ(1) =⎢ ⎥ , Φ(4) =⎢ ⎥ (5–75)
⎣0.47912⎦ ⎣−0.72825⎦
0.28979 0.56197

Calculate Φ(2) by inverse vector iteration with deflation, and Φ (3) by forward vector iteration with deflation. In
both cases the following start vector is used

⎡ ⎤
1
⎢ ⎥
⎢1⎥
Φ0 = ⎢ ⎥ (5–76)
⎣1⎦
1

The matrices A and B become


5.5 Vector Iteration with Gram-Schmidt Orthogonalization 111

⎡ ⎤ ⎫
2.4 3.2 1.4 0.8 ⎪

⎢ ⎥ ⎪

⎢3.2 5.2 2.4 1.4⎥ ⎪

A = K−1 M = ⎢ ⎥ ⎪

⎣2.8 4.8 2.6 1.6⎦ ⎪





1.6 2.8 1.6 1.2 ⎬
⎡ ⎤⎪ (5–77)
2.5 −2.0 0.5 0.0 ⎪⎪

⎢ ⎥⎪⎪
⎢−2.0 3.0 −2.0 0.5⎥ ⎪

B = M−1 K = ⎢ ⎥⎪⎪

⎣ 1.0 −4.0 6.0 −4.0 ⎪
⎦ ⎪


0.0 1.0 −4.0 5.0

At the 1st iteration step in the inverse iteration process towards Φ (2) the following calculations are performed

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 2.4 3.2 1.4 0.8 1 7.8

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ ⎢3.2 5.2 2.4 1.4⎥ ⎢1⎥ ⎢11.2⎥

⎪ Φ̄ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⇒ c1 = Φ(1) T MΦ̄1 = 24.7067

⎪ 1 =
⎣ ⎦ ⎣ ⎦
=
⎣ ⎦

⎪ 2.8 4.8 2.6 1.6 1 11.8



⎪ 1.6 2.8 1.6 1.2 1 7.2





⎪ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤

⎪ 7.8 0.31263 0.07595


⎨ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢11.2⎥ ⎢0.49548⎥ ⎢−0.04158⎥
Φ̂1 = ⎢ ⎥ − 24.7067 ⎢ ⎥=⎢ ⎥ ⇒ Φ̂T1 MΦ̂1 = 0.01801 (5–78)

⎪ ⎣11.8⎦ ⎣0.47912⎦ ⎣−0.03740⎦



⎪ 7.2 0.28979 0.04016





⎪ ⎡ ⎤ ⎡ ⎤



⎪ 0.07595 0.56599

⎪ ⎢ ⎥ ⎢ ⎥

⎪ 1 ⎢−0.04158⎥ ⎢−0.30989⎥

⎪ Φ = √ ⎢ ⎥ = ⎢ ⎥


1
0.01801 ⎣−0.03740⎦ ⎣−0.27871⎦



0.04016 0.29927

The results for the iteration vector in the succeeding iteration steps become

⎡ ⎤ ⎫
0.61639 ⎪

⎢ ⎥ ⎪

⎢−0.14318⎥ ⎪

Φ2 = ⎢ ⎥ ⎪

⎣−0.42383⎦ ⎪





−0.13960 ⎪





⎡ ⎤ ⎪

0.53412 ⎪

⎢ ⎥ ⎪

⎢ 0.02582⎥ ⎪

Φ3 = ⎢ ⎥ ⎬
⎣−0.48439⎦ (5–79)


−0.43985 ⎪ ⎪



.. ⎪



. ⎪
⎡ ⎤⎪


0.44527 ⎪


⎢ ⎥⎪

⎢ 0.12443⎥⎪

Φ13 =⎢ ⎥⎪

⎣−0.48944⎦ ⎪⎪



−0.57702

The process converged with the indicated digit after 13 iterations.


112 Chapter 5 – VECTOR ITERATION METHODS

At the 1st iteration step in the forward iteration process towards Φ (3) the following calculations are performed

⎧ ⎡ ⎤⎡ ⎤ ⎡ ⎤

⎪ 2.5 −2.0 0.5 0.0 1 1.0

⎪ ⎢ ⎥⎢ ⎥ ⎢ ⎥

⎪ ⎢−2.0 3.0 −2.0 0.5⎥ ⎢1⎥ ⎢−0.5⎥

⎪ Φ̄ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⇒ c4 = Φ(4) T MΦ̄1 = 1.38144

⎪ 1 =
⎣ ⎦ ⎣ ⎦
=
⎣ ⎦

⎪ 1.0 −4.0 6.0 −4.0 1 −1.0



⎪ 0.0 1.0 −4.0 5.0 1 2.0





⎪ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤

⎪ −0.10756


1.0 1.14859
⎨ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢−0.5⎥ ⎢ 0.25563⎥ ⎢−0.85314⎥
Φ̂1 = Φ̄1 − c4 Φ(4) = ⎢ ⎥ − 1.38144 ⎢ ⎥=⎢ ⎥ ⇒ Φ̂T1 MΦ̂1 = 5.59161

⎪ ⎣−1.0⎦ ⎣−0.72825⎦ ⎣ 0.00604⎦



⎪ 2.0 0.56197 1.22367





⎪ ⎡ ⎤ ⎡ ⎤



⎪ 1.14859 0.48573

⎪ ⎢ ⎥ ⎢ ⎥

⎪ 1 ⎢−0.85314⎥ ⎢−0.36079⎥
⎪ Φ1 = √
⎪ ⎢ ⎥ = ⎢ ⎥

⎪ 5.59161 ⎣ 0.00604⎦ ⎣ 0.00256⎦



1.22367 0.51748
(5–80)
The results for the iteration vector in the succeeding iteration steps become

⎡ ⎤⎫
0.44542 ⎪⎪
⎢ ⎥⎪⎪
⎢−0.41392⎥ ⎪

Φ2 = ⎢ ⎥⎪⎪
⎣−0.02891⎦ ⎪




0.50962 ⎪⎪




⎡ ⎤⎪⎪

0.44063 ⎪⎪
⎢ ⎥⎪⎪
⎢−0.41617⎥ ⎪

Φ3 = ⎢ ⎥⎬
⎣−0.02534⎦ (5–81)


0.51445 ⎪⎪



.. ⎪



. ⎪
⎡ ⎤⎪⎪

0.43867 ⎪⎪

⎢ ⎥⎪⎪
⎢−0.41674⎥ ⎪


Φ9 = ⎢ ⎥⎪
⎣−0.02322⎦ ⎪




0.51696
The process converged with the indicated digit after 9 iterations.

Based on the Rayleigh quotient estimates of the obtained eigenmodes the following eigenvalues may be calculated,
cf. (5-2)

⎡ ⎡⎤ ⎤
λ1 0 0 0 0.09654 0 0 0
⎢ ⎥ ⎢ ⎥
⎢0 λ2 0 0⎥ ⎢ ⎥
Λ=⎢ ⎥=⎢ 0 1.39147 0 0 ⎥ (5–82)
⎢ ⎥ ⎢ ⎥
⎣0 0 λ3 0⎦ ⎣ 0 0 4.37355 0 ⎦
= 0 0 λ4 0 0 0 10.6384
5.6 Exercises 113

5.6 Exercises
5.1 Given the following mass- and stiffness matrices
⎡ ⎤ ⎡ ⎤
2 0 0 6 −1 0
⎢ ⎥ ⎢ ⎥
M = ⎣0 2 1⎦ , K = ⎣−1 4 −1⎦
0 1 1 0 −1 2

(a.) Perform two inverse iterations, and then calculate an approximation to λ 1 .


(b.) Perform two forward iterations, and then calculate an approximation to λ 3 .

5.2 Given the following mass- and stiffness matrices


⎡ ⎤ ⎡ ⎤
1
0 0 2 −1 0
⎢2 ⎥ ⎢ ⎥
M = ⎣0 1 0 ⎦ , K = ⎣−1 4 −1⎦
0 0 12 0 −1 2

The eigenmodes Φ(1) are Φ(3) are known to be, cf. (1-87)
⎡ √2 ⎤ ⎡ √
2

2 2
⎢√ ⎥ √
⎢ 2⎥
Φ(1) = ⎣ 22 ⎦ , Φ(3) = ⎣− 2 ⎦
√ √
2 2
2 2

(a.) Calculate Φ(2) by means of Gram-Schmidt orthogonalization, and calculate all eigen-
values.

5.3 Given the following mass- and stiffness matrices


⎡ ⎤ ⎡ ⎤
4 1 0 0 0 2 −1 0 0 0
⎢1 4 1 0 0⎥ ⎢−1 2 −1 0⎥
⎢ ⎥ ⎢ 0 ⎥
⎢ ⎥ ⎢ ⎥
M = ⎢0 1 4 1 0⎥ , K = ⎢ 0 −1 2 −1 0⎥
⎢ ⎥ ⎢ ⎥
⎣0 0 1 4 1⎦ ⎣ 0 0 −1 2 −1⎦
0 0 0 1 4 0 0 0 −1 2
(a.) Write a MATLAB program, which calculates the lowest three eigenvalues and eigen-
modes of the related generalized eigenvalue problem by means of inverse vector iter-
ation with Gram-Schmidt ortogonalization.
114 Chapter 5 – VECTOR ITERATION METHODS
C HAPTER 6
SIMILARITY TRANSFORMATION
METHODS

6.1 Introduction
Iterative similarity transformation methods are based on a sequence of similarity transforma-
tions of the original generalized eigenvalue problem in order to reduce this to a simpler form.
The general form of a similarity transformation is defined by the following coordinate transfor-
mation of the eigenmodes

Φ(j) = PΨ(j) (6–1)

where P is the transformation matrix, and Φ(j) and Ψ(j) signify the old and the new coordinates
of the eigenmode. Then, the eigenvalue problem (1-9) may be written


KΦ(j) = λj MΦ(j) ⇒⎪




KPΨ(j) = λj MPΨ(j) ⇒⎪

(6–2)
K̃Ψ(j) = λj M̃Ψ(j) ⎪






K̃ = PT KP , M̃ = PT MP

The eigenvalues λj are unchanged under a similarity transformation, whereas the eigenmodes
are related by (6-1). In the iteration process the transformation matrix P is determined, so this
matrix converge toward the modal matrix Φ = [Φ (1) Φ(2) · · · Φ(n) ]. Hence, after convergence of
the iteration process the eigenmodes are stored column-wise in P = Φ. On condition that the
eigenmodes have been normalized to unit modal mass it then follows from (1-19) and (1-21)
that K̃ = PT KP = Λ, and M̃ = PT MP = I, so the transformed stiffness and mass matrices
become diagonal at convergence, and the eigenvalues are stored in the main diagonal of K̃. By
contrast
 to
 vector iteration methods similarity transformation methods determine all eigen-pairs
λj , Φ(j) , j = 1, . . . , n.

The general format of the similarity iteration algorithm has been summarized in Box 6.1.

— 115 —
116 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

Box 6.1: Iterative similarity transformation algorithm

Let M0 = M, K0 = K and Φ0 = I. Repeat the following items for k = 0, 1, . . .

1. Calculate appropriate transformation matrix Pk at the kth iteration step.

2. Calculate updated transformation matrix and transformed mass and stiffness matrices
Φk+1 = Φk Pk , Mk+1 = PTk Mk Pk , Kk+1 = PTk Kk Pk

After convergence:

k = K∞ , m = M∞
⎡ ⎤
λj 1 0 · · · 0
⎢0 λ 0 ⎥
⎢ j2 · · · ⎥ 1
Λ=⎢ . . . . ⎥ = m−1 k , Φ = Φ(j1 ) Φ(j2 ) · · · Φ(jn ) = Φ∞ m− 2
⎣ .. .
. . . . ⎦
.
0 0 · · · λj n

Orthonormal transformation matrices fulfill, cf. (1-23)

P−1 T
k = Pk (6–3)

For transformation methods operating on the generalized eigenvalue problem, such as the gen-
eral Jacobi iteration method considered in Section 6.2, the transformation matrices P k are not
orthonormal, in which case Mk and Kk converge towards the diagonal matrices m and k as
given by (1-20) and (1-22). Then, PTk Pk = I, and an original SEVP will change into a GEVP
during the iteration process. The eigenvalue matrix Λ and the normalized modal matrix Φ are
1
retrieved
as indicated in Box 6.1, where m− 2 denotes a diagonal matrix with the components
1/ Mj in the main diagonal.

Some similarity transformation algorithms are devised for the special eigenvalue problem, as
is the case for the special Jacobi iteration method in Section 6.1, and the Householder-QR
iteration method in Section 6.3. Hence, application of these methods require an initial similarity
transformation from a GEVP to a SEVP as explained in Section 3.4. This may be achieved by
specifying the transformation matrix of the transformation k = 0 in Box 6.1 as, cf. (3-48)

 T
P0 = S−1 (6–4)

where S fulfills (3-44). Then, M1 = I. If the succeeding similarity transformation matrices are
orthonormal, then all transformed mass matrices become identity matrices as seen by induction
from Mk+1 = PTk Mk Pk = PTk IPk = I. Moreover, Φk+1 is orthonormal at each iteration step,
6.1 Introduction 117

as seen by induction from ΦTk+1 Φk+1 = PTk ΦTk Φk Pk = PTk IPk = I.

Finally, it should be noticed that after convergence the sequence of eigenvalues in the main diag-
onal of Λ and the eigenmodes in Φ is not ordered in ascending magnitude of the corresponding
eigenvalues as indicated in Box 6.1, where the set of indices (j 1 , j2 , . . . , jn ) denotes an arbitrary
permutation of the numbers (1, 2, . . . , n).

Example 6.1: Interchange of rows and columns in GEVP by means of a similarity transformation
Interchange of rows and columns in a matrix may be performed by a similarity transformation. Assume, that the
if the ith row and column are to be interchanged with the jth row and colums. Then the similarity transformation
matrix is given as

⎡ i j ⎤
1 0 ··· 0 ··· 0 ··· 0
⎢ ⎥
⎢0 1 ··· 0 · · · 0 · · · 0⎥
⎢ ⎥
⎢ .. .. . . . . .⎥
⎢. . . .. · · · .. · · · .. ⎥
⎢ ⎥
⎢0 0 ··· 0 · · · 1 · · · 0⎥
⎢ ⎥i (6–5)
P = ⎢. .. . .. . . ⎥
⎢ .. . · · · .. . .. · · · .. ⎥
⎢ ⎥
⎢ ⎥
⎢0 0 ··· 1 · · · 0 · · · 0⎥ j
⎢. .⎥
⎢. .. . . .. ⎥
⎣. . · · · .. · · · .. . .. ⎦
0 0 ··· 0 ··· 0 ··· 1
The rule is that ones are placed at the positions (i, j) and (j, i) of P. Consider the generalized eigenvalue problem
defined by (4-16). It is easily verified that P −1 = PT , so the transformation matrix is orthonormal, cf. (3-4). The
intention is to interchange the 1st row and column with the 2nd row and column, and next the new 2nd row and
column with the 4th row and column. This is achieved by two similarity transformations with the transformation
matrices P1 and P2 , given the following combined transformation matrix obtained as a product of tweo matrices
of the type (6-5)

⎡ ⎤⎡ ⎤ ⎡ ⎤
0 1 0 0 1 0 0 0 0 0 0 1
⎢ ⎥⎢ ⎥ ⎢ ⎥
⎢1 0 0 0⎥ ⎢0 0 0 1⎥ ⎢1 0 0 0⎥
P = P1 P2 = ⎢ ⎥⎢ ⎥=⎢ ⎥ (6–6)
⎣0 0 1 0⎦ ⎣0 0 1 0⎦ ⎣0 0 1 0⎦
0 0 0 1 0 1 0 0 0 1 0 0
The transformed stiffness and mass matrices become
⎡ ⎤T ⎡ ⎤⎡ ⎤ ⎡ ⎤ ⎫
0 0 0 1 0 0 0 0 0 0 0 1 2 0 0 0 ⎪



⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎪

⎢1 0 0 0⎥ ⎢0 2 0 0⎥ ⎢1 0 0 0⎥ ⎢0 1 0 0⎥ ⎪

M̃ = ⎢ ⎥ ⎢ ⎥⎢ ⎥ =⎢ ⎥ ⎪

⎣0 0 1 0⎦ ⎣0 0 0 0⎦ ⎣0 0 1 0⎦ ⎣0 0 0 0⎦ ⎪



0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 ⎪

(6–7)
⎡ ⎤T ⎡ ⎤⎡ ⎤ ⎡ ⎤⎪

0 0 0 1 2 −1 0 0 0 0 0 1 2 0 −1 −1 ⎪⎪

⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎪

⎢1 0 0 0⎥ ⎢−1 2 −1 0⎥ ⎢1 0 0 0⎥ ⎢ 0 1 −1 0⎥⎪⎪
K̃ = ⎢ ⎥ ⎢ ⎥⎢ ⎥=⎢ ⎥⎪

⎣0 0 1 0⎦ ⎣ 0 −1 2 −1⎦ ⎣0 0 1 0⎦ ⎣−1 −1 2 0 ⎪
⎦⎪



0 1 0 0 0 0 −1 1 0 1 0 0 −1 0 0 2
In the formulation (6-1 the solutions for the eigenmodes in the transformed system as given by (4-21) - (4-23) may
be written as
118 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

⎡ ⎤
1
− 12 0 0
⎢ √22 √ ⎥
⎢ 2
0 0⎥
⎢ 2 2 ⎥
Ψ= ⎢ √ √ ⎥ (6–8)
⎢1 + 2 − 14 + 2
0⎥
⎣4 4 4 1 ⎦
1
4 − 14 0 1

The corresponding eigenmodes of the original system is obtained from, cf. (6-1)

⎡ ⎤⎡ 1 ⎤ ⎡ ⎤
0 0 0 1 − 21 0 0 1
− 41 0 1
√2 4
⎥⎢ ⎥ ⎢ 1 ⎥


⎢1 0 0 0⎥ ⎢ 2 2
0 0⎥ ⎢ 2 − 21 0 0⎥
Φ = PΨ = ⎢ ⎥⎢⎢
2√ 2 √ ⎥=


⎢ 1 √2 √ ⎥
⎥ (6–9)
⎣0 0 1 0⎦ ⎣ 14 + 42 − 14 + 2
1 0⎦ ⎣4 + 4 − 14 + 2
1 0⎦
4 √ √ 4
0 1 0 0 1
− 41 0 1 2 2
0 0
4 2 2

6.2 Special Jacobi Iteration


The special Jacobi iteration algorithm operates on the special eigenvalue problem, so M = I
at the outset. The idea is to ensure during the kth transformation that the off-diagonal compo-
nent Kij,k , entering the ith and jth row and column of K k , becomes zero after the similarity
transformation. The transformation matrix is given as

⎡ i j ⎤
1 0 ··· 0 0 ··· 0 0 ··· 0
⎢ ⎥
⎢0 1 ··· 0 0 ··· 0 0 ··· 0⎥
⎢. .. . . .. .. .. .. .. ⎥
⎢ .. . . ··· . ··· .⎥
⎢ . . . ⎥
⎢ ⎥
⎢0 0 · · · cos θ 0 · · · − sin θ 0 ··· 0⎥ i
⎢ ⎥
⎢0 0 ··· 0 1 ··· 0 0 ··· 0⎥
Pk = ⎢
⎢ .. .. .. .. . . .. .. .. ⎥

(6–10)
⎢. . ··· . . . . . ··· .⎥
⎢ ⎥
⎢0 0 · · · sin θ 0 ··· cos θ 0 · · · 0⎥ j
⎢ ⎥
⎢0 0 ··· 0 0 ··· 0 1 · · · 0⎥
⎢ ⎥
⎢ .. .. .. .. .. .. . . .. ⎥
⎣. . ··· . . ··· . . . .⎦
0 0 ··· 0 0 ··· 0 0 ··· 1

Basically, (6-10) is a identity matrix, where only the components P ii , Pij , Pji and Pjj are differ-
ing. Obviously, (6-10) is orthonormal. The components of the updated similarity transformation
matrix Φk+1 = Φk Pk and the transformed stiffness matrix K k+1 = PTk Kk Pk become


Φli,k+1 = Φli,k cos θ + Φlj,k sin θ , l = 1, . . . , n
(6–11)
Φlj,k+1 = Φlj,k cos θ − Φli,k sin θ , l = 1, . . . , n
6.2 Special Jacobi Iteration 119



⎪ Kii,k+1 = Kii,k cos2 θ + Kjj,k sin2 θ + 2Kij,k cos θ sin θ



⎨ Kjj,k+1 = Kjj,k cos θ + Kii,k sin θ − 2Kij,k cos θ sin θ
⎪ 2 2

Kij,k+1 = Kjj,k − Kii,k cos θ sin θ + Kij,k cos2 θ − sin2 θ (6–12)


⎪ Kli,k+1 = Kil,k+1 = Kli,k cos θ + Klj,k sin θ , l = i, j




Klj,k+1 = Kjl,k+1 = Klj,k cos θ − Kli,k sin θ , l = i, j

The remaining components of Φk+1 and Kk+1 are identical to those of Φk and Kk . Hence, only
the ith and jth row and column of Kk are affected by the transformation.

Box 6.2: Special Jacobi iteration algorithm

Let M0 = I, K0 = K and Φ0 = I. Repeat the following items for the sweeps m =


1, 2, . . .

1. Specify omission criteria εm in the mth sweep.

2. Check, if the component Kij,k in the ith row and jth column of K k fulfills the criteria

2
Kij,k
< εm
Kii,k Kjj,k

3. If the criteria is fulfilled, then skip to the next component in the sweep. Else perform
the following calculations

(a.) Calculate the transformation angle θ from (6-13), and then the transformation
matrix Pk as given by (6-10).
(b.) Calculate the components of the updated similarity transformation matrix
Φk+1 = Φk Pk , and the transformed stiffness matrix K k+1 = PTk Kk Pk from
(6-11) and (6-12). Notice that k after the mth sweep is of the magnitude
1
2
(n − 1)n · m.

After convergence:
⎡ ⎤
λj 1 0 · · · 0
⎢0 λ 0 ⎥
⎢ j2 · · · ⎥
Φ∞ = Φ = [Φ(j1 ) Φ(j2 ) · · · Φ(jn ) ] , K∞ =Λ=⎢ . . . . ⎥
⎣ .. .. . . .. ⎦
0 0 · · · λj n

Next, the angle θ is determined, so the off-diagonal component K ij,k+1 becomes equal to zero
120 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

1 
Kij,k+1 = − Kii,k − Kjj,k sin 2θ + Kij,k cos 2θ = 0 ⇒
2
⎧  
⎪ 1 2Kij,k (6–13)
⎨ θ = arctan , Kii,k = Kjj,k
2 Kii,k − Kjj,k

⎩θ = π , Kii,k = Kjj,k
4
Notice, that even though Kij,k+1 = 0 after the transformation, a subsequent transformation
involving either the ith or jth row or column may reintroduce a non-zero value at this position.
Optimally, Kij,k should be selected as the numerically largest off-diagonal component in K k .
However, in practice the iteration process is often performed in so-called sweeps, where all
1
2
(n − 1)n components above the main diagonal in turn are selected as the critical element to
become zero after the transformation. In this case the method is combined with a criteria for
omission of the similarity transformation, in case the component is numerically small. The
transformation is omitted, if


2
Kij,k
< εm (6–14)
Kii,k Kjj,k

where εm is the omission value in the mth sweep.

Finally, it should be noticed that if K 0 has a banded structure, so non-zero components are
grouped in a band around the main diagonal, the banded structure is not preserved during the
transformation process as seen from Example 6.1, where the initial matrix K 0 is on a three di-
agonal form, whereas the transformed matrix K1 is full, see (6-16) below.

The special Jacobi iteration algorithm can be summarized as indicated in Box 6.2.

Example 6.2: Special Jacobi iteration

Given a special eigenvalue problem with the stiffness matrix

⎡ ⎤ ⎡ ⎤
2 −1 0 1 0 0
⎢ ⎥ ⎢ ⎥
K = K0 = ⎣−1 4 −1⎦ , Φ0 = ⎣0 1 0⎦ (6–15)
0 −1 2 0 0 1
6.2 Special Jacobi Iteration 121

In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
⎧   

⎪ 1 2 · (−1) cos θ = 0.9239

⎪ θ = arctan = 0.3927 ⇒

⎪ 2 2 − 4

⎪ sin θ = 0.3827



⎪ ⎡ ⎤



⎪ 0.9239 −0.3827 0

⎨ ⎢ ⎥
P0 = ⎣0.3827 0.9239 0⎦

⎪ 0 0 1





⎪ ⎡ ⎤ ⎡ ⎤

⎪ −0.3827 −0.3827

⎪ 0.9239 0 1.5858 0

⎪ ⎢ ⎥ T ⎢ ⎥

⎪ Φ1 = Φ0 P0 = ⎣0.3827 0.9239 0⎦ , K1 = P0 K0 P0 = ⎣ 0 4.4142 −0.9239⎦


0 0 1 −0.3827 −0.9239 2
(6–16)

Next, the calculations are performed for (i, j) = (1, 3) :


⎧   

⎪ 1 2 · (−0.3827) cos θ = 0.8591

⎪ θ = arctan = 0.5374 ⇒

⎪ 2 1.5858 − 2

⎪ sin θ = 0.5119



⎪ ⎡ ⎤



⎪ 0.8591 0 −0.5119

⎨ ⎢ ⎥
P1 = ⎣ 0 1 0 ⎦

⎪ 0.5119 0 0.8591





⎪ ⎡ ⎤ ⎡ ⎤

⎪ −0.3827 −0.4729 −0.4729

⎪ 0.7937 1.3578 0

⎪ ⎢ ⎥ T ⎢ ⎥

⎪ Φ2 = Φ1 P1 = ⎣0.3287 0.9238 −0.1959⎦ , K2 = P1 K1 P1 = ⎣−0.4729 4.4142 −0.7937⎦


0.5119 0 0.8591 0 −0.7937 2.2280
(6–17)

Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
⎧   

⎪ 1 2 · (−0.7937) cos θ = 0.9511

⎪ θ = arctan = −0.3140 ⇒

⎪ 4.4142 − 2.2280 sin θ = −0.3089


2



⎪ ⎡ ⎤



⎪ 1 0 0

⎨ ⎢ ⎥
P2 = ⎣ 0 0.9511 0.3089⎦

⎪ 0 −0.3089 0.9511





⎪ ⎡ ⎤ ⎡ ⎤

⎪ 0.7937 −0.2179 −0.5680 1.3578 −0.4498 −0.1461



⎪ ⎢ ⎥ T ⎢ ⎥

⎪ Φ3 = Φ2 P2 = ⎣0.3287 0.9392 0.0991⎦ , K3 = P2 K2 P2 = ⎣−0.4498 4.6720 0 ⎦


0.5119 −0.2653 0.8171 −0.1461 0 1.9703
(6–18)

Φ3 and K3 represents the estimates of the modal matrix Φ and Λ after the 1st sweep. As seen the K 12,1 = 0,
whereas K12,2 = −0.4729. This is in agreement with the statement above, that off-diagonal components set to
zero in one iteration, may attain non-zero values in a later iteration. Comparison of K 0 to K3 shows that the
numerical maximum off-diagonal component has decreased from | − 1| = 1 to | − 0.4498| after the 1st sweep.
Hence, the algorithm is converging.

At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the eigenvalues
122 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

⎧ ⎡ ⎤ ⎡ ⎤

⎪ 0.6276 −0.3258 −0.7071 1.2680 0.0039 −0.0000

⎪ ⎢ ⎥ ⎢ ⎥

⎪ Φ6 = ⎣0.4607 0.8876 0.0000⎦ , K6 = ⎣ 0.0039 4.7320 0 ⎦




⎨ 0.6276 −0.3258 0.7071 −0.0000 0 2.0000
⎡ ⎤ ⎡ ⎤ (6–19)



⎪ 0.6280 −0.3251 −0.7071 1.2679 −0.0000 −0.0000

⎪ ⎢ ⎥ ⎢ ⎥

⎪ Φ9 = ⎣0.4597 0.0000⎦ , K9 = ⎣−0.0000 ⎦


0.8881 4.7321 0

0.6280 −0.3251 0.7071 −0.0000 0 2.0000

As seen the eigenmodes are stored column-wise in Φ according to the permutation (j 1 , j2 , j3 ) =


(1, 3, 2).

6.3 General Jacobi Iteration


The general Jacobi iteration method operates on the generalized eigenvalue problem, i.e. M =
I. The idea of the transformation is to ensure that during the kth transformation the off-diagonal
component Mij,k and Kij,k , entering the ith and jth row and column of M k and Kk , simultane-
ous become zero after the similarity transformation.

 
β  
  xj 1 cos θ
− sin θ
1 sin θ
cos θ
 
1
α
xi
1

Fig. 6–1 Projection of ith and jth column vectors of similarity transformation matrix in the (x i , xj )-plane.

The transformation matrix is given as


6.3 General Jacobi Iteration 123

⎡ i j ⎤
1 0 ··· 0 0 ··· 0 0 ···
0
⎢ ⎥
⎢0 1 ··· 0 0 ··· 0 0 ···
0⎥
⎢. .. . . .. .. .. ⎥
.. ..
⎢ .. . . . ··· .⎥
. . ···
⎢ . ⎥
⎢ ⎥
⎢0 0 ··· 1 0 ··· β 0 ···
0⎥ i
⎢ ⎥
⎢0 0 ··· 0 1 ··· 0⎥
0 0 ···
Pk = ⎢
⎢ .. .. .. .. . . .. ⎥
.. .. ⎥
(6–20)
⎢. . ··· . . . . . ···
.⎥
⎢ ⎥
⎢0 0 · · · α 0 · · · 1 0 · · · 0⎥ j
⎢ ⎥
⎢0 0 · · · 0 0 · · · 0 1 · · · 0⎥ ⎥

⎢ .. .. . . . . . .⎥
⎣. . · · · .. .. · · · .. .. . . .. ⎦
0 0 ··· 0 0 ··· 0 0 ··· 1
Because we have to specify requirements for both Mij,k+1 and Kij,k+1, we need two free pa-
rameters α and β in the transformation matrix, where only the angle θ appears in (6-10). As a
consequence (6-20) is not orthonormal. Actually, the ith and jth column vectors neither have
the length 1 nor are mutual orthogonal, by contrast to the corresponding vectors in (6-10), see
Fig. 6-1. The components of the updated similarity transformation matrix Φ k+1 = Φk Pk
and the transformed mass and stiffness matrices, M k+1 = PTk Mk Pk and Kk+1 = PTk Kk Pk ,
become


Φli,k+1 = Φli,k + α Φlj,k , l = 1, . . . , n
(6–21)
Φlj,k+1 = Φlj,k + β Φli,k , l = 1, . . . , n



⎪ Mii,k+1 = Mii,k + α2 Mjj,k + 2αMij,k



⎪ 2
⎨ Mjj,k+1 = Mjj,k + β Mii,k + 2βMij,k
 
Mij,k+1 = βMii,k + αMjj,k + Mij,k 1 + αβ (6–22)



⎪ Mli,k+1 = Mil,k+1 = Mli,k + αMlj,k , l = i, j



Mlj,k+1 = Mjl,k+1 = Mlj,k + βMli,k , l = i, j



⎪ Kii,k+1 = Kii,k + α2 Kjj,k + 2αKij,k



⎪ 2
⎨ Kjj,k+1 = Kjj,k + β Kii,k + 2βKij,k
 
Kij,k+1 = βKii,k + αKjj,k + Kij,k 1 + αβ (6–23)



⎪ Kli,k+1 = Kil,k+1 = Kli,k + αKlj,k , l = i, j



Klj,k+1 = Kjl,k+1 = Klj,k + βKli,k , l = i, j
The remaining components of Φk+1, Mk+1 and Kk+1 are identical to those of Φk , Mk and Kk .
Hence, only the ith and the jth row and columns of K k and Mk are affected by the transforma-
tion.
124 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

Next, the parameters α and β are determined, so the off-diagonal components M ij,k+1 and
Kij,k+1 become equal to zero

  ⎫
Mij,k+1 = βMii,k + αMjj,k + Mij,k 1 + αβ = 0 ⎬
  (6–24)
Kij,k+1 = βKii,k + αKjj,k + Kij,k 1 + αβ = 0 ⎭

The solution of (6-24) becomes, see Box 6.3

& . ' ⎫
1 1 1 ⎪
a ⎪
α= − + ab , β = − α⎪

a 2 4 b ⎪







Kjj,k Mij,k − Mjj,k Kij,k , if Kii,k Mjj,k = Mii,k Kjj,k
a= ⎪
Kii,k Mjj,k − Mii,k Kjj,k ⎪



Kii,k Mij,k − Mii,k Kij,k ⎪
⎪ (6–25)


b= ⎪

Kii,k Mjj,k − Mii,k Kjj,k ⎭


Kii,k Mij,k − Mii,k Kij,k 1
α= , β=− , if Kii,k Mjj,k = Mii,k Kjj,k
Kjj,k Mij,k − Mjj,k Kij,k α
6.3 General Jacobi Iteration 125

Box 6.3: Proof of equation (6-25)

From (6-19) follows

βKii,k + αKjj,k Kij,k Kjj,k Mij,k − Mjj,k Kij,k


= ⇒ β=− α (6–26)
βMii,k + αMjj,k Mij,k Kii,k Mij,k − Mii,k Kij,k

Elimination of β in the 1st equation in (6-24) by means of (6-26) provides the following
quadratic equation in α

   
Mij,k Kjj,k Mij,k − Mjj,k Kij,k α2 − Mij,k Kii,k Mjj,k − Mii,k Kjj,k α−
 
Mij,k Kii,k Mij,k − Mii,k Kij,k = 0 (6–27)

If Kii,k Mjj,k = Mii,k Kjj,k the coefficient in front of α cancels. Then, in combination to
(6-26) the following solutions are obtained for α and β


Kii,k Mij,k − Mii,k Kij,k 1
α=± , β=− (6–28)
Kjj,k Mij,k − Mjj,k Kij,k α

If Kii,k Mjj,k = Mii,k Kjj,k solutions of the quadratic equation for α in combination to
(6-26) provides

& . '
1 1 1 a
α= ± + ab , β=− α (6–29)
a 2 4 b

where a and b are as given in (6-25). Both sign combinations in (6-28) and (6-29) will do.

The transformations are performed in sweeps as for the special Jacobi method. In this case the
criteria for omitting a transformation during the mth sweep may be formulated as


2 2
Kij,k Mij,k
+ < εm (6–30)
Kii,k Kjj,k Mii,k Mjj,k

where εm is the omission value in the mth sweep.

The general Jacobi iteration algorithm can be summarized as indicated in Box 6.4.
126 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

Box 6.4: General Jacobi iteration algorithm

Let M0 = M, K0 = K and Φ0 = I. Repeat the following items for the sweeps m =


1, 2, . . .

1. Specify omission criteria εm in the mth sweep.

2. Check, if the components Mij,k and Kij,k in the ith row and jth column of Mk Kk
fulfill the criteria

2 2
Kij,k Mij,k
+ < εm
Kii,k Kjj,k Mii,k Mjj,k

3. If the criteria is fulfilled, then skip to the next component in the sweep. Else perform
the following calculations

(a.) Calculate the parameters α and β as given by (6-29), and then the transforma-
tion matrix Pk as given by (6-20).
(b.) Calculate the components of the updated similarity transformation matrix
Φk+1 = Φk Pk , and the transformed mass and stiffness matrices M k+1 =
PTk Mk Pk and Kk+1 = PTk Kk Pk from (6-21), (6-22) and (6-23). Notice that k
after the mth sweep is of the magnitude 12 (n − 1)n · m.

After convergence:

k = K∞ , m = M∞
⎡ ⎤
λj 1 0 · · · 0
⎢0 λ 0 ⎥
⎢ j2 · · · ⎥ 1
Λ=⎢ . .. . . .. ⎥ = m−1 k , Φ = [Φ(j1 ) Φ(j2 ) · · · Φ(jn ) ] = Φ∞ m− 2
⎣ .. . . . ⎦
0 0 · · · λj n

Example 6.3: General Jacobi iteration


Given a generalized eigenvalue problem with the mass and stiffness matrices
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
0.5 0.5 0 2 −1 0 1 0 0
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
M = M0 = ⎣0.5 1 0.5⎦ , K = K0 = ⎣−1 4 −1⎦ , Φ0 = ⎣0 1 0⎦ (6–31)
0 0.5 1 0 −1 2 0 0 1
6.3 General Jacobi Iteration 127

In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
⎧ ⎧ 

⎪ ⎪
⎪ 2 · 0.5 − 0.5 · (−1)

⎪ ⎪
⎨α =


⎪ 4 · 0.5 − 1 · (−1)
= 0.7071

⎪ NB : K11,0 M22,0 = K22,0 M11,0

⎪ ⎪

⎪ ⎪


⎪ ⎩β = − 1 = −1.4142

⎪ 0.7071



⎪ ⎡ ⎤ ⎡ ⎤

⎨ 1 −1.4142 0 1 −1.4142 0
⎢ ⎥ ⎢ ⎥

⎪ P0 = ⎣0.7071 1 0⎦ , Φ1 = Φ0 P0 = ⎣0.7071 1 0⎦



⎪ 0 0 1 0 0 1





⎪ ⎡ ⎤ ⎡ ⎤

⎪ −0.7071

⎪ 1.7071 0 0.3536 2.5858 0

⎪ ⎢ ⎥ ⎢ ⎥
⎪ M1 = PT0 M0 P0 = ⎣ 0
⎪ 0.5858
T
0.5 ⎦ , K1 = P0 K0 P0 = ⎣ 0 10.8284 −1 ⎦


0.3536 0.5 1 −0.7071 −1 2
(6–32)

Next, the calculations are performed for (i, j) = (1, 3) :

⎧ ⎫

⎪ 2 · 0.3536 − 1 · (−0.7071) ⎪
⎪ 

⎪ a = = −1.7071⎬

⎪ 2.5858 · 1 − 1.7071 · 2 α = 0.9664

⎪ ⇒

⎪ 2.5858 · 0.3536 − 1.7071 · (−0.7071) ⎪ β = −0.6443


⎪ b= = −2.5607⎪


⎪ 2.5858 · 1 − 1.7071 · 2



⎪ ⎡ ⎤ ⎡ ⎤

⎨ 1 0 −0.6443 1 −1.4142 −0.6443
⎢ ⎥ ⎢ ⎥
⎪ P1 = ⎣ 0 1 0 ⎦ , Φ2 = Φ1 P1 = ⎣0.7071 1 −0.4556⎦



⎪ 0.9664 0 1 0.9664 0 1





⎪ ⎡ ⎤ ⎡ ⎤

⎪ −0.9664

⎪ 3.3243 0.4832 0 3.0869 0

⎪ ⎢ ⎥ ⎢ ⎥

⎪ M2 = PT1 M1 P1 = ⎣0.4832 0.5858 0.5 ⎦ , K2 = PT1 K1 P1 = ⎣−0.9664 10.8284 −1 ⎦


0 0.5 1.2530 0 −1 3.9844
(6–33)

Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
⎧ ⎫

⎪ 3.9844 · 0.5 − 1.2530 · (−1) ⎪
⎪ 

⎪ a = = 0.2889⎬

⎪ 10.8284 · 1.2530 − 0.5858 · 3.9844 α = −0.4702

⎪ ⇒

⎪ 10.8284 · 0.5 − 0.5858 · (−1) ⎪
= 0.5341⎪
⎪ ⎭ β = 0.2543

⎪ b=

⎪ 10.8284 · 1.2530 − 0.5858 · 3.9844



⎪ ⎡ ⎤ ⎡ ⎤

⎨ 1 0 0 1 −1.1113 −1.0039
⎢ ⎥ ⎢ ⎥
⎪ P2 = ⎣0 1 0.2543⎦ , Φ3 = Φ2 P2 = ⎣0.7071 1.2142 −0.2012⎦



⎪ 0 −0.4702 1 0.9664 −0.4702 1





⎪ ⎡ ⎤ ⎡ ⎤

⎪ −0.9664 −0.2458

⎪ 3.3243 0.4832 0.1229 3.0869

⎪ ⎢ ⎥ ⎢ ⎥

⎪ M3 = PT2 M2 P2 = ⎣0.4832 0.3926 T
0 ⎦ , K3 = P2 K2 P2 = ⎣−0.9664 12.6498 0 ⎦


0.1229 0 1.5452 −0.2458 0 4.1761
(6–34)

At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the transformed
mass and stiffness matrices
128 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

⎧ ⎡ ⎤

⎪ 0.7494 −1.2825 −1.0742

⎪ ⎢ ⎥

⎪ Φ6 = ⎣0.8195 1.0999 −0.2865⎦



⎪ 1.0376 −0.6084

⎪ 0.9213



⎪ ⎡ ⎤ ⎡ ⎤

⎪ −0.0024 3.0336 0.0048 −0.0000

⎪ 3.4931 0.0000

⎪ ⎢ ⎥ ⎢ ⎥

⎪ M6 = ⎣−0.0024 0.3225 0 ⎦ , K6 = ⎣ 0.0048 13.029 0 ⎦



⎪ 0.0000 0 1.5517 −0.0000 0 4.2464


(6–35)

⎪ ⎡ ⎤



⎪ 0.7501 −1.2820 −1.0742

⎪ ⎢ ⎥

⎪ Φ9 = ⎣0.8189 1.1005 −0.2865⎦





⎪ 1.0379 −0.6076 0.9213



⎪ ⎡ ⎤ ⎡ ⎤



⎪ 3.4932 −0.0000 0.0000 3.0336 0.0000 −0.0000

⎪ ⎢ ⎥ ⎢ ⎥

⎪ M9 = ⎣−0.0000 0.3225 0 ⎦ , K9 = ⎣ 0.0000 13.029 0 ⎦


0.0000 0 1.5517 −0.0000 0 4.2464

Presuming that the process has converged after the 3rd sweep the eigenvalues and normalized eigenmodes are next
retrieved by the following calculations, cf. Box. 6.4

⎧ ⎡ ⎤ ⎡ ⎤

⎪ 3.4932 −0.0000 0.0000 0.5350 0 0

⎪ ⎢ ⎥ 1 ⎢ ⎥

⎪ m = M9 = ⎣−0.0000 0.3225 0 ⎦ , m− 2 = ⎣ 0 1.7608 0 ⎦ ⇒





⎪ 0.0000 0 1.5517 0 0 0.8028



⎪ ⎡ ⎤ ⎡ ⎤

⎪ 0.0000 −0.0000
⎨ λ1 0 0 0.8684
⎢ ⎥ ⎢ ⎥
Λ = ⎣ 0 λ3 0 ⎦ = M−1 K 9 = ⎣ 0.0000 40.395 −0.0000⎦ (6–36)


9

⎪ 0 0 λ −0.0000 −0.0000 2.7365


2

⎪ ⎡ ⎤



⎪ 0.4013 −2.2573 −0.8623

⎪ ⎢ ⎥


1
Φ = Φ(1) Φ(3) Φ(2) = Φ9 m− 2 = ⎣0.4381 1.9378 −0.2300⎦



0.5553 −1.0698 0.7396

The reader should verify that the solution matrices within the indicated accuracy fulfill Φ T MΦ =
I and ΦT KΦ = Λ.

6.4 Householder Reduction


The Householder reduction method operates on the special eigenvalue problem . Hence, a pre-
liminary similarity transformation of a GEVP to a SEVP must be performed as explained in
Section 3.4.

The Householder method reduces a symmetric matrix K 1 to three diagonal form by totally n−2
consecutive similarity transformations. After the (n − 2)th transformation the stiffness matrix
has the form
6.4 Householder Reduction 129

⎡ ⎤
α1 β1 0 ··· 0 0
⎢ ⎥
⎢ β1 α2 β2 ··· 0 0 ⎥
⎢ ⎥
⎢ 0 β2 α3 ··· 0 0 ⎥
Kn−1 =⎢
⎢ .. .. .. .. .. .. ⎥
⎥ (6–37)
⎢. . . . . . ⎥
⎢ ⎥
⎣0 0 0 · · · αn−1 βn−1 ⎦
0 0 0 · · · βn−1 αn

During the reduction process the numbers α 1 , . . . , αn and β1 , . . . , βn−1 , as well as the sequence
of transformation matrices P1 , . . . , Pn−2 are determined. Since all transformation matrices be-
come orthonormal all transformed mass matrices remain unit matrices.

After completing the Householder reduction process the special eigenvalue problem with the
three diagonal matrix Kn−1 must be solved by some kind of iteration method, which preserves
the three diagonal structure of the reduced system matrix, and benefits from this reduced struc-
ture in order to improve the calculation time. As mentioned in Section 6.2 this requirement
rules out the special Jacobi iteration method. Since, the inverse of a three diagonal matrix is
full, inverse vector iteration with Gram-Schmidt orthogonalization must also be avoided. Of the
methods discussed hitherto only forward vector iteration with Gram-Schmidt orthogonalization
meets the requirement. As wee shall see the requirements are also met by the QR iteration
method to be discussed in Section 6.5. Finally, an initial Householder reduction is favorable in
relation to characteristic polynomial iteration methods discussed in Section 7.4.

The transformation matrix during the kth similarity transformation is given as follows

Pk = I − 2wk wkT , |wk | = 1 (6–38)

wk denotes a unit column vector to be determined below. Hence, w kT wk = 1.

Obviously, Pk is symmetric, i.e. Pk = PTk . Moreover, Pk is orthonormal as seen from the


following derivation

Pk PTk = I − 2wk wkT I − 2wk wkT =


 
I − 2wk wkT − 2wk wkT + 4 wkT wk wk wkT = I ⇒
PTk = P−1
k (6–39)

As mentioned, this means that the mass matrix remains an identity matrix during the House-
holder similarity transformations, because this is ensured in the initial transformation from a
GEVP to a SEVP, as explained in the remarks subsequent to (6-4).
130 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

wk
−2wk (wkT x)
l
wkT x

x
wkT x

Pk x
Fig. 6–2 Geometrical interpretation of the Householder transformation.

Consider a given column vector x. Then,


 
Pk x = I − 2wk wkT x = x − 2 wkT x wk (6–40)

Notice that wkT x is a scalar. The transformed vector, Pk x, may be interpreted as a reflection of
x in the line l, which is orthogonal to the vector w k and placed in the plane spanned by x and
wk as illustrated in Fig. 6-2.

At the kth transformation the applied unit vector w k is taken on the following form

⎡ ⎤
0
⎢ . ⎥
⎢ .. ⎥
⎢ ⎥ ⎡ ⎤/
⎢ ⎥ k rows
⎢ 0 ⎥ 0

wk = ⎢ ⎥ ⎣ ⎦
⎥= / (6–41)
⎢wk+1 ⎥ w̄k n − k rows
⎢ ⎥
⎢ .. ⎥
⎣ . ⎦
wn
where

wkT wk = w̄kT w̄k = wk+1


2
+ · · · + wn2 = 1 (6–42)
Then, the transformation matrix may be written on the following matrix form

k n−k
columns columns
0123 0123
⎡ ⎤/ (6–43)
Īn−k 0 k rows
Pk = ⎣ ⎦ , P̄k = Īk − 2w̄k w̄kT
/
0 P̄k n − k rows

where Īk denotes a unit matrix of dimension (n − k) × (n − k).


6.4 Householder Reduction 131

In order to determine the sub-vector w̄k defining the transformation matrix, the stiffness matrix
before the kth similarity transformation is considered, at which stage the stiffness matrix has
been reduced to three diagonal form down to and including the (k − 1)th row and column.
Hence, the stiffness matrix has the structure
n−k
columns
k 0123
⎡ ⎤
α1 β1 0 · · · 0 0 0
⎢ ⎥
⎢ β1 α2 β2 · · · 0 0 0 ⎥
⎢ ⎥
⎢ 0 β2 α3 · · · 0 0 0 ⎥
⎢ ⎥ (6–44)
⎢ .. .. .. . . .. .. .. ⎥
⎢. . . . . . . ⎥
Kk = ⎢ ⎢ 0 0 0 · · · αk−1 βk−1


⎢ 0 ⎥
⎢ ⎥
⎢ ⎥
⎢ 0 0 0 · · · βk−1 Kkk kk ⎥
⎢ ⎥k
⎣ ⎦4
0 0 0 ··· 0 kk T
K̄k n − k rows

kk is a row vector of the dimension (n − k), and K̄k is a symmetric matrix of the dimension
(n − k) × (n − k) defined as

kk = [Kk k+1 Kk k+2 · · · Kkn ] (6–45)


⎡ ⎤
Kk+1 k+1 Kk+1 k+2 · · · Kk+1 n
⎢K · · · Kk+2 n ⎥
⎢ k+2 k+1 Kk+2 k+2 ⎥
K̄k = ⎢ .. .. .. .. ⎥ (6–46)
⎣ . . . . ⎦
Kn k+1 Kn k+2 · · · Knn

Then, with the transformation matrix given by (6-43) the stiffness matrix after the kth transfor-
mation becomes
n−k
columns
k 0123
⎡ ⎤
α1 β1 0 · · · 0 0 0
⎢ ⎥
⎢ β1 α2 β2 · · · 0 0 0 ⎥
⎢ ⎥
⎢ 0 β2 α3 · · · 0 0 0 ⎥
⎢ ⎥
⎢ .. .. .. . . .. .. .. ⎥
⎢ . . . . . . . ⎥
T
Kk+1 = Pk Kk Pk = ⎢ ⎢ ⎥
0 0 0 · · · α β 0 ⎥
⎢ k−1 k−1 ⎥
⎢ ⎥
⎢ ⎥
⎢ 0 0 0 · · · βk−1 Kkk k P̄ ⎥k
⎢ k k ⎥
⎣ ⎦4
0 0 0 ··· 0 T T
P̄k kk T
P̄k K̄k P̄k n − k rows
(6–47)
132 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

where

αk = Kkk (6–48)


 
kk P̄k = kk Īk − 2w̄k w̄kT = kk − 2 kk w̄k w̄kT (6–49)

 
P̄Tk K̄k P̄k = K̄k − 2w̄k w̄kT K̄k − 2K̄k w̄k w̄kT + 4 w̄kT K̄k w̄k w̄k w̄kT (6–50)

Since, kk is a row vector and w̄k is a column vector, kk w̄k is a scalar. Similarly, w̄kT K̄k w̄k
becomes a scalar.

If the kth row and column in (6-45) should be on a three-diagonal form, it is required that

⎡ ⎤
1
⎢0⎥
⎢ ⎥
P̄Tk kTk = βk ēk , ēk = ⎢ . ⎥ (6–51)
⎣ .. ⎦
0

where ēk is a unit column vector of dimension (n−k). The transformation matrix is symmetric,
so P̄Tk kTk = P̄k kTk . Moreover, P̄k kTk is a reflection of the vector kTk in the line l as depicted in
Fig. 6-2, and hence has the length |kTk |. Hence, it follows that βk should be selected as

βk = ±|kk | (6–52)

Then it follows from (6-49) that

 
kTk − 2 kk w̄k w̄k = ±|kk |ēk ⇒

w̄k = a kTk ∓ |kk |ēk (6–53)


 
where it is noticed that 2 kTk w̄k is a scalar, which may be absorbed in the coefficient a. a is
determined so the vector w̄k is of unit length.
6.4 Householder Reduction 133

Box 6.5: Householder reduction algorithm


 T
Transform the GEVP to a SEVP by the similarity transformation matrix P = S−1 ,
where S is a solution to M = SST , and define the initial transformation and stiffness
matrices as
 T  T
K1 = S−1 K S−1 , Φ1 = S−1
Next, repeat the following items for k = 1, . . . , n − 2

1. Calculate the similarity transformation matrix P k at the kth similarity transformation


by (6-43), (6-55).

2. Calculate updated transformation and stiffness matrices from (6-47), (6-57)


Φk+1 = Φk Pk , Kk+1 = PTk Kk Pk

After completion of the reduction process the following standard eigenvalue problem is
solved by some iteration method

Kn−1 V = VΛ
Λ is the diagonal eigenvalue matrix of the original GEVP, and V is the orthonormal
eigenvector matrix of the three diagonal matrix K n−1 . Then, the eigenmodes normalized
to unit modal mass of the original GEVP are retrieved from the matrix product

Φ = Φn−1 V

Both sign combinations in (6-52) and (6-53) will do. However, in order to prevent numerical
problems of the algorithm in the case, where kk Kk k+1 ēk the following choice of sign in the
solutions for βk and w̄k should be preferred


βk = −sign(Kk k+1 |kk | (6–54)

kT + sign(Kk k+1 )|kk |ēk


w̄k = ++ kT + (6–55)
kk + sign(Kk k+1 )|kk |ēk +
The updated transformation matrix before the kth transformation is partitioned as follows

k n−k
columns columns
0123 0123
⎡ ⎤/ (6–56)
Φ11 Φ12 k rows
Φk = ⎣ ⎦
/
Φ21 Φ22 n − k rows
134 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

With the transformation matrix as given by (6-43) the transformation matrix after the kth trans-
formation becomes
k n−k
columns columns
0123 0123
⎡ ⎤/ (6–57)
Φ11 Φ12 P̄k k rows
Φk+1 = ⎣ ⎦
/
Φ21 Φ22 P̄k n − k rows
Finally, it should be noticed that alternative algorithms for reduction to three diagonal form have
been indicated by Givens1 and Lanczos.2
Example 6.4: Householder reduction

Given a generalized eigenvalue problem with the mass and stiffness matrices given by (5-74). The similarity
transformation matrix transforming from a GEVP to a SEVP becomes
⎡√ ⎤ ⎡√ ⎤
2
2 0 0 0 0 0 0
⎢ √ ⎥
2 √
1 ⎢ 0 2 0 0⎥  −1 T ⎢
⎢ 0 2
0 0⎥

S = M2 = ⎢ ⎥ ⇒ S =⎢ 2 ⎥ (6–58)
⎣ 0 0 1 0⎦ ⎣ 0 0 1 0⎦
0 0 0 1 0 0 0 1
Then, the stiffness matrix and updated transformation matrix before the 1st Householder similarity transformation
becomes, cf. (3-47), (3-48)
⎡√ ⎤⎡ ⎤ ⎡√ ⎤ ⎫
2
0 0 0 5 −4 1 0 2
0 0 0 ⎪

2 √ 2 √ ⎪
 T ⎢⎢0 2
0
⎥⎢
0⎥ ⎢−4 6 −4
⎥⎢
1⎥ ⎢ 0 2
0

0⎥



K1 = S−1 K S−1 = ⎢ 2 ⎥⎢ ⎥⎢ 2 ⎥ ⇒⎪


⎣0 0 1 0⎦ ⎣ 1 −4 6 −4⎦ ⎣ 0 0 1 0⎦ ⎪



0 0 0 1 0 1 −4 5 0 0 0 1 ⎪



⎡ ⎤ ⎪

√ ⎪

5
−2 2
0 ⎪


2 √2 √ ⎪

2⎥ ⎬
⎢−2 3 −2 2 2 ⎥
K1 = ⎢ √ 2 √ ⎥ (6–59)
⎣ 2 −2 2 6 −4⎦ ⎪

√ ⎪

0 2
−4 5 ⎪

2 ⎪



⎡√ ⎤ ⎪

2
0 0 0 ⎪

2 √ ⎪

⎢ ⎥ ⎪

⎢0 2
0 0⎥ ⎪

Φ1 = ⎢ 2 ⎥ ⎪

⎣0 0 1 0⎦ ⎪



0 0 0 1
At the Householder transformation k = 1 one has

⎪ 5

⎪ α1 =

⎨ 2
 √  √ (6–60)

⎪ 2 3 2


⎩ k1 = −2 2
0 , |k1 | =
2
1
W. Givens: Numerical Computation of the Characteristic Values of a Real Symmetric Matrix. Oak Ridge
National Laboratory, ORNL - 1574, 1954.
2
C. Lanczos: An Iterative Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral
Operators. Journal of Research of the National Bureau of Standards, 45(4), 1950, 255-282.
6.4 Householder Reduction 135

Then, cf. (6-43), (6-54) and (6-55)

⎧ √ √

⎪ β1 = −sign(−2)
3 2 3 2

⎪ = = 2.1213

⎪ 2 2



⎪ ⎛⎡ ⎤ ⎡ ⎤⎞ ⎡ √ ⎤ ⎡ ⎤

⎪ −2 √ 1 −2 − 3 2
−0.9856

⎪ √ √
⎨ w̄1 = a ⎜ ⎢ ⎥ 3 2 ⎢ ⎥⎟ ⎢ ⎥ ⎢ ⎥
2
⎪ ⇒ w̄1 = ⎣ 0.1691⎦
⎝⎣ 22 ⎦ + sign(−2) ⎣0⎦⎠ = a ⎣ 2
2

2 (6–61)

⎪ 0 0 0 0



⎪ ⎡ ⎤



⎪ −0.9828 0.3333 0

⎪ ⎢ ⎥

⎪ P̄1 = Ī1 − 2w̄1 w̄1T = ⎣ 0.3333 0.9428 0⎦


0 0 1

The stiffness matrix and updated transformation matrix after the Householder transmission k = 1 becomes

⎧ ⎡ ⎤

⎪ 2.5000 2.1213 0 0

⎪ ⎢ ⎥

⎪ ⎢2.1213 5.1111 3.1427 −2.0000⎥

⎪ K = PT
K P = ⎢ ⎥


2 1 1 1
⎣ 0 3.1427 3.8889 −3.5355⎦




⎨ 0 −2.0000 −3.5355 5.000
⎡ ⎤ (6–62)



⎪ 0.7071 0 0 0

⎪ ⎢ ⎥

⎪ ⎢ 0 −0.6667 0.2357 0⎥

⎪ Φ2 = Φ1 P1 = ⎢ ⎥

⎪ ⎣ 0

⎪ 0.3333 0.9428 0⎦

0 0 0 1

where the transformed matrices are calculated by means of (6-47) and (6-57), respectively.

At the Householder transformation k = 2 the following calculations are performed


⎨ α2 = 5.1111
(6–63)
⎩ k = [3.1427 − 2.0000] , |k2 | = 3.7251
2



⎪ β2 = −sign(3.1427) · 3.7251 = −3.7251



⎪ &   '    



⎪ 3.1427 1 6.8678 0.9601
⎨ w̄2 = a + sign(3.1427) · 3.7251 =a ⇒ w̄2 =
−2.0000 0 −2.0000 −0.2796 (6–64)



⎪  



⎪ −0.8436 0.5369
⎪ T
⎩ P̄2 = Ī2 − 2w̄2 w̄2 =

0.5369 0.8436

The stiffness matrix and updated transformation matrix after the Householder transmission k = 2 becomes
136 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

⎧ ⎡ ⎤

⎪ 2.5000 2.1213 0 0

⎪ ⎢ ⎥

⎪ ⎢2.1213 5.1111 −3.7251 −0.0000⎥

⎪ K = PT
K P = ⎢ ⎥


3 2 2 2
⎣ 0 −3.7251 2.0005⎦


7.4120


⎨ 0 −0.0000 2.0005 1.4769
⎡ ⎤ (6–65)



⎪ 0.7071 0 0 0

⎪ ⎢ ⎥

⎪ ⎢ 0 −0.6667 −0.1988 0.1265⎥

⎪ Φ3 = Φ2 P2 = ⎢ ⎥

⎪ ⎣ 0

⎪ 0.3333 −0.7954 0.5062⎦

0 0 0.5369 0.8436

The reader should verify that the solution matrices within the indicated accuracy fulfill Φ T3 MΦ3 = I and
ΦT3 KΦ3 = K3 .

6.5 QR Iteration
As is the case for the Householder reduction method QR-iteration operates on the standard
eigenvalue problem, so an initial similarity transformation of the GEVP to a SEVP is presumed.
 T
Let K1 = S−1 K S−1 denote the stiffness matrix after the initial similarity transformation,
where S is a solution to M = SST , cf. (3-44), (3-47).

QR iteration is based on the following property that any non-singular matrix K can be factorized
on the following form

K = QR (6–66)

where Q is an orthonormal matrix, and R is an upper triangular matrix. Hence, Q and R have
the form


Q = q1 q2 · · · qn , qTk qj = δkj (6–67)
⎡ ⎤
r11 r12 r13 r14 ··· r1n
⎢ ⎥
⎢ 0 r22 r23 r24 ··· r2n ⎥
⎢ ⎥
⎢0 0 r33 r34 ··· r3n ⎥
R=⎢
⎢0
⎥ (6–68)
⎢ 0 0 r44 ··· r4n ⎥

⎢ .. .. .. .. .. .. ⎥
⎣ . . . . . . ⎦
0 0 0 0 · · · rnn

where δij denotes Kronecker’s delta. It should be noticed that the factorization (6-66) holds
even for non-symmetric matrices. The orthonormality of Q, which implies that Q −1 = QT , is
6.5 QR Iteration 137

essential to the method.

Based on K1 a sequence of transformed stiffness matrices Kk are next constructed with the QR
factors Qk and Rk according to the algorithm


Kk = Qk Rk ⎬
(6–69)
Kk+1 = QTk Kk Qk = QTk Qk Rk Qk = Rk Qk ⎭

Hence, Kk+1 is obtained by a similarity transformation with the transformation matrix Q k . The
transformation is reduced to an evaluation of R k Qk due to the orthonormality property of Q k .
For the same reason all transformed mass matrices remain unit matrices.
138 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

Box 6.6: Proof of equation (6-61)

Let k1 k2 , . . . , kn denote the column vectors of the matrix K, i.e.



K = k1 k2 · · · kn (6–70)

Since K is non-singular, k1 k2 , . . . , kn are linearly independent, and hence form a vector


basis. A new orthonormal vector basis q1 q2 · · · qn linearly dependent on k1 k2 , . . . , kn
may then be constructed by a process, which resembles the Gram-Schmidt orthogonaliza-
tion described in Section 5.5. (6-66) is identical to the following relations


⎪ k1 = r11 q1



⎪ k2 = r12 q1 + r22 q2



⎪ ..

⎨. j
%
kj = r1j q1 + r2j q2 + · · · + rjj qj = rkj qk (6–71)



⎪ .. k=1




.
%n



⎩ kn = rkn qk
k=1

(6-71) is solved sequentially downwards using the properties of orthonormality of q j .


From the 1st equation follows by scalar multiplication with q 1

1
r11 = |k1 | ⇒ q1 = k1 (6–72)
r11
Now, q1 and r11 are known. Scalar multiplication of the 2nd equation with q 1 , and use of
the orthogonality property qT1 q2 = 0, provides

1  
r12 = qT1 k2 ⇒ r22 = |k2 − r12 q1 | ⇒ q2 = k2 − r12 q1 (6–73)
r22
At the determination of qj , 1 < j ≤ n, the mutually ortonormal basis vectors
q1 , q2 , . . . , qj−1 have already been determined. Scalar multiplication of the jth equation
with qk , k = 1, 2, . . . , j − 1, and use of the orthogonality property q Tk qj = 0, provides
+ + & '
+ j−1
% + j−1
%
T + + 1
rkj = qk kj ⇒ rjj = +kj − rkj qk + ⇒ qj = kj − rkj qk (6–74)
+ + rjj
k=1 k=1

Hence a solution fulfilling all requirements has been obtained for the components r kj of
R and the column vectors qj of Q, which proves the validity of the factorization (6-66).
6.5 QR Iteration 139

Box 6.7: QR iteration algorithm


 T
Transform the GEVP to a SEVP by the similarity transformation matrix P = S−1 ,
where S is a solution to M = SST , and define the initial transformation and stiffness
matrices as
 T  T
K1 = S−1 K S−1 , Φ1 = S−1

Repeat the following items for k = 1, 2, . . .

1. Perform a QR factorization of the stiffness matrix before the kth similarity transfor-
mation
Kk = Qk Rk

2. Calculate updated transformation and stiffness matrices by a similarity transforma-


tion with the orthonormal transformation matrix Q k
Φk+1 = Φk Qk , Kk+1 = QTk Kk Qk = Rk Qk

After convergence:
⎡ ⎤
λn 0 ··· 0
⎢0 λ 0⎥
⎢ n−1 · · · ⎥
Λ=⎢ . . . . ⎥ = K∞ = R ∞ , Φ = Φ(n) Φ(n−1) · · · Φ(1) = Φ∞
⎣.. .
. . . .⎦
.
0 0 · · · λ1

Now, it can be proved that

⎡ ⎤
λn 0 ··· 0
⎢0 λ ··· 0⎥
⎢ n−1 ⎥
K∞ = R ∞ =Λ=⎢ . .. .. .. ⎥ , Φ∞ = Φ = Φ(n) Φ(n−1) · · · Φ(1) (6–75)
⎣ .. . . .⎦
0 0 · · · λ1
Qk converge to a unit matrix, as a consequence of K∞ = R∞ .

As seen, at convergence the eigen-pairs are ordered in descending order of the eigenvalues.
Moreover, the algorithm converges faster to the lowest eigenmode than to the largest, as is the
case for subspace iteration as describes in Section 7.3, a method which has some resemblance
to QR iteration. The rate of convergence seems to be rather comparable to that of subspace iter-
ation. These properties have been illustrated in Example 6.4 below. The proof of convergence
and the associated determination of the convergence rate is rather tedious and involved, and will
be omitted here.
140 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

The general QR iteration algorithm can be summarized as indicated in Box 6.7.

Usually, the QR algorithm becomes computational expensive when applied to large full ma-
trices, due to the time consuming orthogonalization process involved in the QR factorization.
However, if Kk is on the three diagonal form (6-37), it can be shown that matrices R k and Qk
have the form

⎡ ⎤
r11 r12 r13 0 0 ··· 0
⎢0 r ··· ⎥
⎢ 22 r23 r24 0 0 ⎥
⎢ ⎥
⎢0 0 r33 r34 r35 ··· 0 ⎥
⎢ ⎥
Rk = ⎢
⎢0 0 0 r44 r45 ··· 0 ⎥
⎥ (6–76)
⎢0 ··· ⎥
⎢ 0 0 0 r55 0 ⎥
⎢ .. .. .. .. .. .. .. ⎥
⎣ . . . . . . . ⎦
0 0 0 0 0 · · · rnn
⎡ ⎤
q11 q12 q13 q14 q15 ··· q1n
⎢q ··· q2n ⎥
⎢ 21 q22 q23 q24 q25 ⎥
⎢ ⎥
⎢ 0 q32 q33 q34 q35 ··· q3n ⎥
⎢ ⎥
Qk = ⎢
⎢0 0 q42 q44 q45 ··· q4n ⎥
⎥ (6–77)
⎢0 ··· q5n ⎥
⎢ 0 0 q54 q55 ⎥
⎢ .. .. .. .. .. .. .. ⎥
⎣ . . . . . . . ⎦
0 0 0 0 0 · · · qnn

Hence, Rk becomes an upper three diagonal matrix with only 3n − 3 nontrivial coefficients r jk
versus 12 n(n + 1) for a full matrix Kk . Similarly, Qk contains zeros below the first lower
diagonal. As a consequence of the indicated structure of Rk and Qk , the matrix product
Kk+1 = Rk Qk will again be a symmetric three diagonal matrix. Hence, this property is pre-
served for the transformed stiffness matrices during the iteration process. This motivates the
application of QR iteration in combination to an initial Householder reduction of the initial
generalized eigenvalue problem to three diagonal form, which is known as the HOQR method.

Example 6.5: HOQR iteration

QR iteration is performed on the stiffness matrix of Example 6.3, which has been reduced to three diagonal form
by Householder reduction. Hence, the initial stiffness matrix and updated transformation matrix reads, cf. (6-65)
⎡ ⎤ ⎡ ⎤
2.5000 2.1213 0 0 0.7071 0 0 0
⎢ ⎥ ⎢ ⎥
⎢2.1213 5.1111 −3.7251 −0.0000⎥ ⎢ 0 −0.6667 −0.1988 0.1265⎥
K1 = ⎢ ⎥ , Φ1 = ⎢ ⎥
⎣ 0 −3.7251 7.4120 2.0005⎦ ⎣ 0 0.3333 −0.7954 0.5062⎦
0 −0.0000 2.0005 1.4769 0 0 0.5369 0.8436
(6–78)

At the determination of q 1 and r11 in the 1st QR iteration the following calculations are performed, cf. (6-72)
6.5 QR Iteration 141

⎧ ⎡ ⎤ +⎡ ⎤+
⎪ 2.5000 + 2.5000 +

⎪ + +

⎪ ⎢ ⎥ +⎢ ⎥+

⎪ ⎢2.1312⎥ +⎢2.1312⎥+

⎪ k1 = ⎢ ⎥ , r11 = +⎢ ⎥+ = 3.2787

⎪ ⎣ 0 ⎦ +⎣ 0 ⎦+

⎪ + +

⎨ 0 + 0 +

⎡ ⎤ ⎡ ⎤ (6–79)



⎪ 2.5000 0.7625




⎪ 1 ⎢ ⎥ ⎢
⎢2.1312⎥ ⎢0.6470⎥


⎪ q1 = ⎢ ⎥ =⎢ ⎥

⎪ 3.2787 ⎣ 0 ⎦ ⎣ 0 ⎦


0 0

q2 and r12 , r22 are determined from the following calculations, cf. (6-73)

⎧ ⎡ ⎤ ⎡ ⎤T ⎡ ⎤

⎪ 2.1213 0.7625 2.1213



⎪ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥

⎪ ⎢ 5.1111⎥ ⎢0.6470⎥ ⎢ 5.1111⎥

⎪ k2 = ⎢ ⎥ , r12 = ⎢ ⎥ ⎢ ⎥ = 4.9244

⎪ ⎣−3.7251⎦ ⎣ 0 ⎦ ⎣−3.7251⎦



⎪ 0 0 0





⎪ +⎡ ⎤ ⎡ ⎤+

⎪ + 2.1213 +

⎪ + 0.7625 +
⎨ +⎢ ⎥ ⎢ ⎥+
+⎢ 5.1111⎥ ⎢0.6470⎥+

r22 = +⎢ ⎥ − 4.9244 · ⎢ ⎥+ = 4.5001 (6–80)

⎪ +⎣−3.7251⎦ ⎣ 0 ⎦+

⎪ + +

⎪ + 0 0 +





⎪ ⎛⎡ ⎤ ⎡ ⎤⎞ ⎡ ⎤

⎪ −0.3630


2.1213 0.7625

⎪ 1 ⎜ ⎢ ⎥ ⎢ ⎥⎟ ⎢ ⎥

⎪ ⎜⎢ 5.1111⎥ ⎢0.6470⎥⎟ ⎢ 0.4278⎥

⎪ q2 = ⎜ ⎢ ⎥ − 4.9244 · ⎢ ⎥ ⎟ = ⎢ ⎥

⎪ 4.5001 ⎝⎣−3.7251⎦ ⎣ 0 ⎦⎠ ⎣−0.8278⎦


0 0 0

q3 and r13 , r23 , r33 are determined from the following calculations, cf. (6-74)

⎧ ⎡ ⎤

⎪ 0

⎪ ⎢ ⎥

⎪ ⎢−3.7251⎥

⎪ k = ⎢ ⎥ , r13 = qT1 k3 = −2.4101 , r23 = qT2 k3 = −7.7292


3
⎣ 7.4120⎦





⎪ 2.0005



⎨ + +
+ +
r33 = +k3 + 2.4101q1 + 7.7292q2+ = 2.6959 (6–81)





⎪ ⎡ ⎤

⎪ −0.3590



⎢ ⎥


⎪ 1 ⎢ 0.4231⎥

⎪ q3 = k3 + 2.4101q1 + 7.7292q2 = ⎢ ⎥

⎪ 2.6959 ⎣ 0.3761⎦


0.7421
142 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

Finally, q4 and r14 , r24 , r34 , r44 are determined from the following calculations, cf. (6-74)

⎧ ⎡ ⎤

⎪ 0

⎪ ⎢ ⎥

⎪ ⎢ 0 ⎥

⎪ k = ⎢ ⎥ , r14 = qT1 k4 = 0 , r24 = qT2 k4 = −1.6560


4
⎣ 2.0005 ⎦





⎪ 1.4769



⎨ + +
+ +
r34 = qT3 k4 = 1.8483 , r44 = +k4 − 0q1 + 1.6560q2 − 1.8483q3 + = 0.1571 (6–82)





⎪ ⎡ ⎤



⎪ 0.3974

⎢ ⎥


⎪ 1 ⎢−0.4684⎥

⎪ q = k − 0q + 1.6560q − 1.8483q = ⎢ ⎥
⎪ 4
⎪ 0.1571
4 1 2 3
⎣−0.4163⎦


0.6703

Then, at the end of the 1st iteration the following matrices are obtained

⎧ ⎡ ⎤⎫

⎪ 0.7625 −0.3630 −0.3590 0.3974 ⎪

⎪ ⎢ ⎥⎪⎪


⎪ ⎢0.6470 0.4278 0.4231 −0.4684⎥ ⎪



⎪ Q1 = ⎢ ⎥⎪⎪



⎣ 0 −0.8278 0.3761 −0.4163 ⎪
⎦ ⎪


⎪ ⎪


⎪ 0 0 0.7421 0.6703 ⎪⎬



⎪ ⇒

⎪ ⎡ ⎤ ⎪


⎪ 3.2787 4.9244 −2.4101 0 ⎪


⎪ ⎪


⎪ ⎢ ⎥ ⎪


⎪ ⎢ 0 4.5001 −7.7292 −1.6560⎥ ⎪
⎪ R1 = ⎢ ⎥ ⎪


⎪ ⎣ 1.8483⎦ ⎪



0 0 2.6959 ⎪




⎪ 0 0 0 0.1571

(6–83)

⎪ ⎡ ⎤⎫



⎪ 0.5392 −0.2567 −0.2539 0.2810 ⎪⎪

⎪ ⎢ ⎥⎪

⎪ ⎢−0.4313 −0.1206 −0.2629 0.4799⎥ ⎪



⎪ Φ 2 = Φ1 Q 1 = ⎢ ⎥⎪⎪



⎪ ⎣ 0.2157 0.8010 0.2175 0.5143 ⎪
⎦ ⎪


⎪ ⎪



⎪ 0 −0.4444 0.8280 0.3420 ⎪⎬



⎪ ⎡ ⎤

⎪ ⎪


⎪ 5.6860 2.9115 0 0 ⎪


⎪ ⎢ ⎥ ⎪


⎪ ⎢2.9115 8.3232 −2.2317 0 ⎥ ⎪


⎪ K2 = R 1 Q1 = ⎢ ⎥ ⎪


⎪ ⎣ 0 ⎪


⎪ −2.2317 2.3854 0.1166⎦ ⎪

⎩ ⎭
0 0 0.1166 0.1053

As seen the matrices R1 and Q1 have the structure (6-76) and (6-77). Additionally, K 2 has the same three diagonal
structure as K1 . The corresponding matrices after the 2nd and 3rd iteration become
6.5 QR Iteration 143

⎧ ⎡ ⎤⎫

⎪ 0.8901 −0.4279 −0.1566 0.0117 ⎪

⎪ ⎢ ⎥⎪⎪

⎪ ⎢ 0.3058 −0.0229⎥ ⎪

⎥⎪
⎪ 0.4558 0.8356 ⎪

⎪ Q2 = ⎢ ⎪



⎣ 0 −0.3445 0.9362 −0.0702⎦ ⎪


⎪ ⎪


⎪ 0.9972 ⎪⎬


0 0 0.0748

⎪ ⇒

⎪ ⎡ ⎤ ⎪

⎪ ⎪


⎪ 6.3881 6.3850 −1.0171 0 ⎪


⎪ ⎢ ⎥ ⎪


⎪ ⎢ 0 6.4780 −2.6866 −0.0402⎥ ⎪


⎪ R2 = ⎢ ⎥ ⎪


⎪ ⎣ 0 0 1.5595 0.1170⎦ ⎪


⎪ ⎪




⎪ 0 0 0 0.0968

(6–84)

⎪ ⎡ ⎤⎫



⎪ 0.3629 −0.3577 −0.3795 0.3103 ⎪


⎪ ⎢ ⎥⎪⎪



⎪ ⎢−0.4389 0.1744 −0.1796 0.4947 ⎥ ⎪


⎪ Φ 3 = Φ2 Q 2 = ⎢ ⎥ ⎪


⎪ ⎣ 0.5570 0.5021 0.4533 0.4818 ⎦ ⎪

⎪ ⎪


⎪ ⎪


⎪ −0.2026 −0.6566 0.6648 0.2931 ⎬



⎪ ⎡ ⎤ ⎪

⎪ ⎪

⎪ 8.5962 2.9525 0 0 ⎪


⎪ ⎢ ⎥ ⎪


⎪ ⎢ 2.9525 6.3386 −0.5372 0 ⎥ ⎪


⎪ K = R Q = ⎢ ⎥ ⎪



3 2 2
⎣ ⎦ ⎪


⎪ 0 −0.5372 1.4687 0.0072 ⎪

⎩ ⎭
0 0 0.0072 0.0966

⎧ ⎡ ⎤⎫

⎪ 0.9458 −0.3230 −0.0345 0.0002 ⎪

⎪ ⎢ ⎥⎪⎪

⎪ ⎢0.3248 0.1003 −0.0005⎥ ⎪

⎥⎪
⎪ 0.9404 ⎪

⎪ Q3 = ⎢ ⎪



⎣ 0 −0.1061 0.9943 −0.0051⎦ ⎪


⎪ ⎪


⎪ 1.0000 ⎪⎬


0 0 0.0051

⎪ ⇒

⎪ ⎡ ⎤ ⎪

⎪ ⎪


⎪ 9.0891 4.8514 −0.1745 0 ⎪


⎪ ⎢ ⎥ ⎪


⎪ ⎢ 0 5.0643 −0.6610 −0.0008⎥ ⎪


⎪ R3 = ⎢ ⎥ ⎪


⎪ ⎣ 0 0 1.4065 0.0077⎦ ⎪


⎪ ⎪




⎪ 0 0 0 0.0965

(6–85)

⎪ ⎡ ⎤⎫



⎪ 0.2270 −0.4134 −0.4242 0.3125 ⎪

⎪ ⎢ ⎥⎪⎪


⎪ ⎢−0.3584 0.3248 −0.1434 0.4954⎥ ⎪ ⎪



⎪ Φ = Φ Q = ⎢ ⎥ ⎪



4 3 3
⎣ 0.6899 0.2442 0.4844 0.4793 ⎦ ⎪

⎪ ⎪


⎪ ⎪


⎪ −0.4049 −0.6226 0.6036 0.2900 ⎬



⎪ ⎡ ⎤ ⎪

⎪ ⎪

⎪ 10.172 1.6451 0 0 ⎪


⎪ ⎢ ⎥ ⎪


⎪ ⎢ 1.6451 4.8328 −0.1492 0 ⎥ ⎪


⎪ K = R Q = ⎢ ⎥ ⎪

⎪ ⎦ ⎪ ⎪
4 3 3

⎪ ⎣ 0 −0.1492 1.3986 0.0005 ⎪

⎩ ⎪

0 0 0.0005 0.0965

As seen from R3 and K4 the terms in the main diagonal have already after the 3rd iteration grouped in descending
magnitude, corresponding to the ordering of the eigenvalues at convergence indicated in Box 6.7. Moreover, for
both matrices convergence to the lowest eigenvalue λ 1 = 0.0965 has occurred, illustrating the fact that the QR
algorithm converge faster to the lowest eigenmode than to the highest.
144 Chapter 6 – SIMILARITY TRANSFORMATION METHODS

The matrices after the 14th iteration become

⎧ ⎡ ⎤⎫

⎪ 1.0000 −0.0000 −0.0000 0.0000 ⎪

⎪ ⎢ ⎥⎪⎪


⎪ ⎢0.0000 1.0000 0.0000 −0.0000⎥ ⎪



⎪ Q14 = ⎢ ⎥⎪⎪



⎣ 0 −0.0000 1.0000 −0.0000⎦ ⎪



⎪ ⎪


⎪ 0 0 0.0051 1.0000 ⎪⎬



⎪ ⇒

⎪ ⎡ ⎤ ⎪


⎪ 10.638 0.0003 −0.0000 0 ⎪


⎪ ⎪


⎪ ⎢ ⎥ ⎪


⎪ ⎢0.0000 4.3735 −0.0000 −0.0008⎥ ⎪

⎪ R14 = ⎢ ⎥ ⎪


⎪ ⎣ 0 0 1.3915 0.0077⎦ ⎪


⎪ ⎪


⎪ 0 0 0 0.0965


(6–86)

⎪ ⎡ ⎤⎫



⎪ 0.1076 −0.4387 −0.4453 0.3126 ⎪ ⎪

⎪ ⎢ ⎥⎪

⎪ ⎢ −0.2556 0.4167 −0.1244 0.4955⎥ ⎪ ⎪


⎪ Φ15 = Φ14 Q14 = ⎢ ⎥⎪⎪



⎪ ⎣ 0.7283 0.0232 0.4894 0.4791 ⎦ ⎪


⎪ ⎪


⎪ −0.5620 −0.5170 ⎪


⎪ 0.5770 0.2898



⎪ ⎡ ⎤ ⎪

⎪ ⎪


⎪ 10.638 0.0001 0 0 ⎪


⎪ ⎢ ⎥ ⎪

⎪ ⎢ 0.0001 4.3735 −0.0000 0 ⎥ ⎪


⎪ K = R Q = ⎢ ⎥ ⎪



15 14 14
⎣ ⎦ ⎪


⎪ 0 −0.0000 1.3915 0.0000 ⎪

⎩ ⎭
0 0 0.0000 0.0965

Presuming that convergence has occurred after the 14th iteration the following solutions are obtained for the eigen-
values and eigenmodes of the original general eigenvalue problem

⎡ ⎤ ⎡ ⎤ ⎫
λ4 0 0 0 10.638 0 0 0 ⎪

⎢ ⎥ ⎢ ⎥ ⎪

⎢0 λ3 0 0⎥ ⎢ 0 4.3735 0 0 ⎥ ⎪

Λ=⎢ ⎥ = K15 = ⎢ ⎥ ⎪

⎣0 ⎪

0 λ2 0⎦ ⎣ 0 0 1.3915 0 ⎦ ⎪



0 0 0 λ1 0 0 0 0.0965 ⎪


⎡ ⎤ (6–87)
0.1076 −0.4387 −0.4453 0.3126 ⎪⎪

⎢ ⎥⎪⎪
⎢ −0.2556 0.4167 −0.1244 0.4955⎥⎪⎪
Φ = Φ(4) Φ(3) Φ(2) Φ(1) = Φ15 =⎢ ⎥⎪⎪
⎣ 0.7283 0.0232 0.4894 0.4791⎦ ⎪




−0.5620 −0.5170 0.5770 0.2898 ⎪⎪

The reader should verify that the solution matrices within the indicated accuracy fulfill Φ T MΦ = I and ΦT KΦ =
Λ, where M and K are the mass and stiffness matrices given by (5-74). (6-87) agrees with the results (5-75), (5-
79), (5-81) and (5-82) in Example 5.6.
6.6 Exercises 145

6.6 Exercises
6.1 Given a symmetric matrix K in a special eigenvalue problem.
(a.) Write a MATLAB program, which performs special Jacobi iteration.

6.2 Given the symmetric matrices M and K.


(a.) Write a MATLAB program, which performs general Jacobi iteration.

6.3 Given the following mass- and stiffness matrices defined in Exercise 4.2.
(a.) Perform an initial transformation to a special eigenvalue problem, and calculate the
eigenvalues and eigenvectors by means of standard Jacobi iteration.
(b.) Calculate the eigenvalues and normalized eigenvectors by means of general Jacobi
iteration operating on the original general eigenvalue problem.

6.4 Given the symmetric matrices M and K of dimension n ≥ 3.


(a.) Write a MATLAB program, which performs a Householder reduction to three diago-
nal form.

6.5 Given the symmetric matrices M and K.


(a.) Write a MATLAB program, which performs QR iteration.

6.6 Consider the mass- and stiffness matrices defined in Exercise 4.2 after the transformation
to the special eigenvalue problem as performed in Exercise 6.3.
(a.) Calculate the eigenvalues and normalized eigenvectors by means of QR iteration.
146 Chapter 6 – SIMILARITY TRANSFORMATION METHODS
C HAPTER 7
SOLUTION OF LARGE EIGENVALUE
PROBLEMS

7.1 Introduction
In civil engineering large numerical models with n = 10 4 − 106 degrees of freedom have be-
come common practise along with the development of computer technology. However, most
natural and man made loads such as wind, waves, earthquakes and traffic have spectral contents
in the low frequency range. As a consequence only a relatively small number n 1
n of the
lowest structural modes will contribute to the global structural dynamic response. In this chap-
ter methods will be discussed, which have been devised with this specific fact in mind.

Sections 7.2 and 7.3 deals with simultaneous inverse vector iteration and socalled subspace
iteration, respectively. In both cases a sequence of subspaces are defined, each of which are
spanned by a specific system of basis vectors. The idea is that these subspaces at the end of the
iteration process contains the n1 lowest eigenmodes Φ(1) , Φ(2) , . . . , Φ(n1 ) of the general eigen-
value problem (1-9). These eigenvalue problems may be assembled on the following matrix
form, cf. (1-14), (1-15), (1-16)

⎡ ⎤
λ1 0 ··· 0
⎢0 λ ··· ⎥
⎢ 2 0 ⎥
K[Φ(1) Φ(2) · · · Φ(n1 ) ] = M[Φ(1) Φ(2) · · · Φ(n1 ) ] ⎢ . . .. .. ⎥ ⇒
⎣ .. .. . . ⎦
0 0 · · · λn 1
KΦ = MΦΛ (7–1)
⎡ ⎤
λ1 0 ··· 0
⎢0 λ ··· ⎥
⎢ 2 0 ⎥
Λ=⎢. . .. .. ⎥ (7–2)
⎣ .. .. . . ⎦
0 0 · · · λn 1

By contrast to the formulation in Chapter 6 the modal matrix Φ is no longer quadratic, but has
the dimension n × n1 , defined as

— 147 —
148 Chapter 7 – SOLUTION OF LARGE EIGENVALUE PROBLEMS


Φ = Φ(1) Φ(2) · · · Φ(n1 ) (7–3)

V∞

(1)
(2) Φ∞ = Φ(1)
Φ∞ = Φ(2)
(2) V0
Φ0

(1)
Φ0

Fig. 7–1 Principle of subspace iteration.

The principle of iterating through a sequence of subspaces has been illustratedin Fig. 7-1. V 0
(1) (2)
denotes a start subspace, which is spanned by the start basis Φ 0 = Φ0 Φ0 . The iteration
process passes through
 a sequence of subspaces
 V1 , V2 , . . ., where Vk is spanned by the basis
(1) (2) (1) (2)
Φk = Φk Φk . At convergence, Φ∞ = Φ∞ Φ∞ = Φ(1) Φ(2) is spanning the limiting
subspace V∞ containing the eigenmodes searched for.

Simultaneous inverse inverse vector iteration is a generalization of the inverse vector iteration
and inverse vector iteration with deflation described in Sections 5.2 and 5.5. The start vector
basis converges towards a basis made up of the wanted eigenmodes as shown in Fig. 7-1.

The so-called subspace iteration method described in Section 7.3 is in principle a sequence of Rayleigh-Ritz analyses, where the Ritz base vectors are forced to converge towards each of the eigenmodes. As a consequence, if the start basis contains the n1 eigenmodes, subspace iteration converges in a single step, which is generally not the case for simultaneous inverse vector iteration. Being based on the convergence of a sequence of vector bases, both methods are in fact subspace iteration methods, although this name has been coined solely for the latter method. A more informative name for this method would probably be Rayleigh-Ritz iteration.

Section 7.4 deals with characteristic polynomial iteration methods, which operate on the characteristic equation (1-10). These methods form an alternative to inverse or forward vector iteration with deflation in case some specific eigenmode, different from the smallest or largest, is searched for. To be numerically efficient these methods require that the generalized eigenvalue problem has been reduced to a standard eigenvalue problem on three diagonal form, e.g. by the Householder reduction described in Section 6.4. Polynomial methods may be based either on numerical iteration of the characteristic polynomial directly, or on a Sturm sequence iteration. Even in the first-mentioned case a Sturm sequence check should be performed after the calculation, to verify that the calculated n1 eigenmodes are indeed the lowest.

It should be noticed that some problems in structural dynamics, such as acoustic transmission and noise emission, are governed by high-frequency structural response. In addition to the numerical problems in calculating these modes, the lack of accuracy of the underlying mechanical models in the high-frequency range adds to the difficulties of using modal analysis in such high-frequency cases.

7.2 Simultaneous Inverse Vector Iteration


 
Let Φ0 = [Φ0(1) Φ0(2) · · · Φ0(n1)] denote n1 arbitrary linearly independent vectors, which span an n1-dimensional start subspace. The simultaneous inverse vector iteration then proceeds according to the algorithm

Φ̄k+1 = AΦk , k = 0, 1, . . . (7–4)

where A = K⁻¹M, cf. (5-4). (7-4) is identical to the inverse vector iteration algorithm described by (5-4); the only difference is that now n1 vectors are iterated simultaneously.

At convergence the iterated base vectors obtained from (7-4) will span an n1-dimensional subspace containing the n1 lowest eigenmodes. However, due to the inherent properties of the inverse vector iteration algorithm all the iterated base vectors tend to become mutually parallel, and parallel to the lowest eigenmode Φ(1). Hence, the vector basis becomes more and more ill-conditioned. For the case shown in Fig. 7-1 this means that the subspace Vk will converge to the limit plane V∞, but the iterated base vectors Φk(1) and Φk(2) become more and more parallel. In order to prevent this, the method is combined with a Gram-Schmidt orthogonalization procedure. Similar to the QR factorization procedure described in Box 6.6, the iterated basis Φ̄k+1 can be written in the following factorized form

Φ̄k+1 = Φk+1 Rk+1 (7–5)

where Φk+1 is an M-orthonormal basis in the iterated subspace, and Rk+1 is an upper triangular matrix. Hence, Φk+1 and Rk+1 have the properties

\[
\Phi_{k+1} = \big[\Phi_{k+1}^{(1)}\ \Phi_{k+1}^{(2)}\ \cdots\ \Phi_{k+1}^{(n_1)}\big]\ ,\qquad
\Phi_{k+1}^{(i)\,T} M\, \Phi_{k+1}^{(j)} = \delta_{ij} \qquad (7–6)
\]
\[
R_{k+1} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & \cdots & r_{1n_1}\\ 0 & r_{22} & r_{23} & \cdots & r_{2n_1}\\ 0 & 0 & r_{33} & \cdots & r_{3n_1}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & r_{n_1 n_1} \end{bmatrix} \qquad (7–7)
\]
 
The M-orthonormal base vectors Φk+1 = [Φk+1(1) Φk+1(2) · · · Φk+1(n1)] spanning the iterated subspace Vk+1, as well as the components of the triangular matrix Rk+1, are determined sequentially in much the same way as the matrices Q and R in the QR factorization described by (6-66) - (6-69). At first, it is noticed that (7-5) is identical to the following relations
\[
\begin{cases}
\bar{\Phi}_{k+1}^{(1)} = r_{11}\Phi_{k+1}^{(1)}\\[4pt]
\bar{\Phi}_{k+1}^{(2)} = r_{12}\Phi_{k+1}^{(1)} + r_{22}\Phi_{k+1}^{(2)}\\[4pt]
\quad\vdots\\[4pt]
\bar{\Phi}_{k+1}^{(j)} = r_{1j}\Phi_{k+1}^{(1)} + r_{2j}\Phi_{k+1}^{(2)} + \cdots + r_{jj}\Phi_{k+1}^{(j)} = \displaystyle\sum_{i=1}^{j} r_{ij}\Phi_{k+1}^{(i)}\\[4pt]
\quad\vdots\\[4pt]
\bar{\Phi}_{k+1}^{(n_1)} = \displaystyle\sum_{i=1}^{n_1} r_{i n_1}\Phi_{k+1}^{(i)}
\end{cases} \qquad (7–8)
\]

(7-8) is solved sequentially downwards using the M-orthonormality of the already determined base vectors Φk+1(j). The details of the derivation have been given in Box 7.1.

After convergence the eigenvalues are obtained from the Rayleigh quotients evaluated with the calculated eigenvectors, cf. (4-25). Since each of the n1 eigenmodes has been normalized to unit modal mass, the quotients become

λj = Φ(j) T KΦ(j) , j = 1, . . . , n1 (7–9)

The Rayleigh quotients in (7-9) may be assembled in the following matrix equation

Λ = ΦT KΦ (7–10)

where

\[
\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_{n_1} \end{bmatrix}\ ,\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n_1)}\big] = \Phi_\infty \qquad (7–11)
\]

It can be proved that the upper triangular matrix Rk+1 converges towards the diagonal matrix Λ⁻¹. Although the Rayleigh quotients (7-10) provide more accurate estimates, the eigenvalues may as an alternative be retrieved from

\[
\Lambda = R_\infty^{-1} \qquad (7–12)
\]

Box 7.1: M-orthonormalization of iterated basis

Evaluating the modal mass on both sides of the 1st equation of (7-8) provides

\[
r_{11} = \big\|\bar{\Phi}_{k+1}^{(1)}\big\| \quad\Rightarrow\quad \Phi_{k+1}^{(1)} = \frac{1}{r_{11}}\,\bar{\Phi}_{k+1}^{(1)} \qquad (7–13)
\]

where the norm ‖Φ̄k+1(1)‖ represents the square root of the modal mass of Φ̄k+1(1), defined as

\[
\big\|\bar{\Phi}_{k+1}^{(1)}\big\| = \Big(\bar{\Phi}_{k+1}^{(1)\,T} M\,\bar{\Phi}_{k+1}^{(1)}\Big)^{\frac12} \qquad (7–14)
\]

Now, Φk+1(1) and r11 are known. Scalar pre-multiplication of the 2nd equation with Φk+1(1)T M, and use of the orthonormality properties Φk+1(1)T M Φk+1(2) = 0 and Φk+1(1)T M Φk+1(1) = 1, provides

\[
r_{12} = \Phi_{k+1}^{(1)\,T} M\,\bar{\Phi}_{k+1}^{(2)} \;\Rightarrow\;
r_{22} = \big\|\bar{\Phi}_{k+1}^{(2)} - r_{12}\Phi_{k+1}^{(1)}\big\| \;\Rightarrow\;
\Phi_{k+1}^{(2)} = \frac{1}{r_{22}}\Big(\bar{\Phi}_{k+1}^{(2)} - r_{12}\Phi_{k+1}^{(1)}\Big) \qquad (7–15)
\]

At the determination of Φk+1(j), 1 < j ≤ n1, the mutually orthonormal basis vectors Φk+1(1), Φk+1(2), ..., Φk+1(j−1) have already been determined. Scalar pre-multiplication of the jth equation with Φk+1(i)T M, i = 1, 2, ..., j − 1, and use of the orthogonality property Φk+1(i)T M Φk+1(j) = 0 provides

\[
r_{ij} = \Phi_{k+1}^{(i)\,T} M\,\bar{\Phi}_{k+1}^{(j)} \;\Rightarrow\;
r_{jj} = \Big\|\bar{\Phi}_{k+1}^{(j)} - \sum_{i=1}^{j-1} r_{ij}\Phi_{k+1}^{(i)}\Big\| \;\Rightarrow\;
\Phi_{k+1}^{(j)} = \frac{1}{r_{jj}}\Big(\bar{\Phi}_{k+1}^{(j)} - \sum_{i=1}^{j-1} r_{ij}\Phi_{k+1}^{(i)}\Big) \qquad (7–16)
\]
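The orthonormalization in Box 7.1 may be mechanized directly. The following MATLAB sketch is an illustration added here, not part of the original text; the function name and interface are chosen freely, and the loop is written in the modified Gram-Schmidt form, which produces the same factors as (7-13) - (7-16) in exact arithmetic.

```matlab
function [Phi, R] = mgs_m(PhiBar, M)
% M-orthonormalize the columns of PhiBar by the Gram-Schmidt procedure of
% Box 7.1, so that PhiBar = Phi*R with Phi'*M*Phi = I and R upper triangular,
% cf. (7-5)-(7-16).
n1 = size(PhiBar, 2);
Phi = zeros(size(PhiBar));
R = zeros(n1, n1);
for j = 1:n1
    v = PhiBar(:, j);
    for i = 1:j-1
        R(i, j) = Phi(:, i)' * M * v;   % r_ij, cf. (7-16)
        v = v - R(i, j) * Phi(:, i);    % remove the i-th component
    end
    R(j, j) = sqrt(v' * M * v);         % r_jj = M-norm of the remainder
    Phi(:, j) = v / R(j, j);            % normalize to unit modal mass
end
end
```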

It is characteristic for the simultaneous inverse vector iteration method, in contrast to the subspace iteration method described in Section 7.3, that eigenmodes which at one level of the iteration process are contained in the iterated subspace may move out of the iterated subspace at later levels, as illustrated in Example 7.1.

Box 7.2: Simultaneous inverse vector iteration algorithm

Given the n1-dimensional start vector basis Φ0 = [Φ0(1) Φ0(2) · · · Φ0(n1)]. The base vectors must be linearly independent, but need not be normalized to unit modal mass. Repeat the following items for k = 0, 1, ...

1. Perform simultaneous inverse vector iteration:
\[
\bar{\Phi}_{k+1} = A\Phi_k\ ,\qquad A = K^{-1}M
\]

2. Perform Gram-Schmidt orthogonalization to obtain a new M-orthonormal iterated vector basis Φk+1, as explained by (7-13) - (7-16), corresponding to the factorization:
\[
\bar{\Phi}_{k+1} = \Phi_{k+1} R_{k+1}
\]

After convergence has been achieved the eigenvalues and eigenmodes normalized to unit modal mass are obtained from:
\[
\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_{n_1}\end{bmatrix}
= \Phi_\infty^T K\,\Phi_\infty = R_\infty^{-1}\ ,\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n_1)}\big] = \Phi_\infty
\]

As for all kinds of inverse vector iteration methods, the convergence rate of the iteration vectors is linear in the quantity

\[
r_1 = \max\Big(\frac{\lambda_1}{\lambda_2},\ \frac{\lambda_2}{\lambda_3},\ \ldots,\ \frac{\lambda_{n_1}}{\lambda_{n_1+1}}\Big) \qquad (7–17)
\]

Correspondingly, the Rayleigh quotients (7-9) have the quadratic convergence rate r2 = r1².

The simultaneous inverse vector iteration algorithm always converges towards the lowest n1 eigenmodes. Hence, no Sturm sequence check is needed to ensure that these modes have indeed been calculated. Further, the rate of convergence seems to be comparable for all modes contained in the subspace, as demonstrated in Example 7.1 below.

The simultaneous inverse vector iteration algorithm may be summarized as indicated in Box 7.2.
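A compact MATLAB sketch of the complete algorithm in Box 7.2 could look as follows. The function name, the convergence criterion and the fixed iteration cap are choices made for this sketch, not prescribed by the text, and the Gram-Schmidt loop of Box 7.1 is inlined for self-containedness.

```matlab
function [Lambda, Phi] = sim_inv_iteration(K, M, Phi0, tol)
% Simultaneous inverse vector iteration, Box 7.2: iterate with A = inv(K)*M
% and M-orthonormalize the basis by Gram-Schmidt after each step.
[L_K, U_K] = lu(K);                      % factorize K once instead of inv(K)
Phi = Phi0;
n1 = size(Phi0, 2);
for k = 1:1000
    PhiBar = U_K \ (L_K \ (M * Phi));    % Phi_bar = A*Phi, cf. (7-4)
    PhiNew = zeros(size(PhiBar));
    for j = 1:n1                         % Gram-Schmidt w.r.t. M, cf. Box 7.1
        v = PhiBar(:, j);
        for i = 1:j-1
            v = v - (PhiNew(:, i)' * M * v) * PhiNew(:, i);
        end
        PhiNew(:, j) = v / sqrt(v' * M * v);
    end
    if norm(abs(PhiNew) - abs(Phi)) < tol, Phi = PhiNew; break; end
    Phi = PhiNew;
end
Lambda = diag(diag(Phi' * K * Phi));     % Rayleigh quotients, cf. (7-10)
end
```

For the matrices of Example 7.1 below, a call such as sim_inv_iteration(K, M, [0 2; 1 1; 2 0], 1e-6) should reproduce Λ = diag(2, 4).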

Example 7.1: Simultaneous inverse vector iteration

Consider the generalized eigenvalue problem defined in Example 1.4. Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous inverse vector iteration with the start vector basis

\[
\Phi_0 = \big[\Phi_0^{(1)}\ \Phi_0^{(2)}\big] = \begin{bmatrix} 0 & 2\\ 1 & 1\\ 2 & 0\end{bmatrix} \qquad (7–18)
\]

The matrix A becomes, cf. (6-44)

\[
A = K^{-1}M = \begin{bmatrix} 2 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12 \end{bmatrix}
= \begin{bmatrix} 0.2917 & 0.1667 & 0.0417\\ 0.0833 & 0.3333 & 0.0833\\ 0.0417 & 0.1667 & 0.2917 \end{bmatrix} \qquad (7–19)
\]

Then, the 1st iterated vector basis becomes, cf. (7-4)

\[
\bar{\Phi}_1 = \big[\bar{\Phi}_1^{(1)}\ \bar{\Phi}_1^{(2)}\big] = A\Phi_0
= \begin{bmatrix} 0.2917 & 0.1667 & 0.0417\\ 0.0833 & 0.3333 & 0.0833\\ 0.0417 & 0.1667 & 0.2917 \end{bmatrix}
\begin{bmatrix} 0 & 2\\ 1 & 1\\ 2 & 0\end{bmatrix}
= \begin{bmatrix} 0.2500 & 0.7500\\ 0.5000 & 0.5000\\ 0.7500 & 0.2500\end{bmatrix} \qquad (7–20)
\]
At the determination of Φ1(1) and r11 in the 1st vector iteration the following calculations are performed, cf. (7-13)

\[
\bar{\Phi}_1^{(1)} = \begin{bmatrix} 0.2500\\ 0.5000\\ 0.7500 \end{bmatrix}\ ,\quad
r_{11} = \big\|\bar{\Phi}_1^{(1)}\big\| = \left(\begin{bmatrix} 0.2500\\ 0.5000\\ 0.7500\end{bmatrix}^T
\begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12 \end{bmatrix}
\begin{bmatrix} 0.2500\\ 0.5000\\ 0.7500\end{bmatrix}\right)^{\frac12} = 0.7500\ ,\quad
\Phi_1^{(1)} = \frac{1}{0.7500}\begin{bmatrix} 0.2500\\ 0.5000\\ 0.7500\end{bmatrix}
= \begin{bmatrix} 0.3333\\ 0.6667\\ 1.0000\end{bmatrix} \qquad (7–21)
\]

Φ1(2) and r12, r22 are determined from the following calculations, cf. (7-15)

\[
\bar{\Phi}_1^{(2)} = \begin{bmatrix} 0.7500\\ 0.5000\\ 0.2500\end{bmatrix}\ ,\quad
r_{12} = \begin{bmatrix} 0.3333\\ 0.6667\\ 1.0000\end{bmatrix}^T
\begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}
\begin{bmatrix} 0.7500\\ 0.5000\\ 0.2500\end{bmatrix} = 0.5833
\]
\[
r_{22} = \left\|\begin{bmatrix} 0.7500\\ 0.5000\\ 0.2500\end{bmatrix} - 0.5833\begin{bmatrix} 0.3333\\ 0.6667\\ 1.0000\end{bmatrix}\right\| = 0.4714\ ,\quad
\Phi_1^{(2)} = \frac{1}{0.4714}\left(\begin{bmatrix} 0.7500\\ 0.5000\\ 0.2500\end{bmatrix} - 0.5833\begin{bmatrix} 0.3333\\ 0.6667\\ 1.0000\end{bmatrix}\right)
= \begin{bmatrix} 1.1785\\ 0.2357\\ -0.7071\end{bmatrix} \qquad (7–22)
\]

Then, at the end of the 1st iteration the following matrices are obtained

\[
R_1 = \begin{bmatrix} 0.7500 & 0.5833\\ 0 & 0.4714 \end{bmatrix}\ ,\qquad
\Phi_1 = \begin{bmatrix} 0.3333 & 1.1785\\ 0.6667 & 0.2357\\ 1.0000 & -0.7071 \end{bmatrix} \qquad (7–23)
\]

The reader should verify that Φ1 R1 = Φ̄1. The corresponding matrices after the 2nd and 3rd iteration become

\[
R_2 = \begin{bmatrix} 0.4787 & 0.1231\\ 0 & 0.2611\end{bmatrix}\ ,\qquad
\Phi_2 = \begin{bmatrix} 0.5222 & 1.1078\\ 0.6963 & 0.1231\\ 0.8704 & -0.8616\end{bmatrix} \qquad (7–24)
\]
\[
R_3 = \begin{bmatrix} 0.4943 & 0.0650\\ 0 & 0.2529\end{bmatrix}\ ,\qquad
\Phi_3 = \begin{bmatrix} 0.6163 & 1.0583\\ 0.7043 & 0.0623\\ 0.7924 & -0.9339\end{bmatrix} \qquad (7–25)
\]

Convergence of the eigenmodes with the indicated number of digits was achieved after 14 iterations, where

\[
R_{14} = \begin{bmatrix} 0.5000 & 0.0000\\ 0 & 0.2500\end{bmatrix}\ ,\qquad
\Phi_{14} = \begin{bmatrix} 0.7071 & 1.0000\\ 0.7071 & 0.0000\\ 0.7071 & -1.0000\end{bmatrix} \qquad (7–26)
\]

Presuming that convergence has occurred after the 14th iteration, the following eigenvalues are obtained from (7-10) and (7-12)

\[
\Lambda = \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix}
= \Phi_{14}^T K\,\Phi_{14} = R_\infty^{-1}
= \begin{bmatrix} 2.0000 & -0.0000\\ -0.0000 & 4.0000\end{bmatrix}\ ,\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\big] = \Phi_{14}
= \begin{bmatrix} 0.7071 & 1.0000\\ 0.7071 & 0.0000\\ 0.7071 & -1.0000\end{bmatrix} \qquad (7–27)
\]

   
λ3 = 6, see (1-87). Then, the convergence rate of the iteration vectors becomes r 1 = max λλ12 , λλ23 = max 24 , 46 =
2
3 , cf. (7-17). This is a relatively large number, which is displayed in the rather slow convergence of the iterative
process. The convergence towards Φ (1) and Φ(2) occurred within the same iteration step. This suggests that the
convergence rate is uniform to all considered modes in the subspace.

Further it is noted that

\[
\Phi^{(1)} = \frac{\sqrt{2}}{2}\begin{bmatrix}1\\1\\1\end{bmatrix}
= \frac{\sqrt{2}}{4}\begin{bmatrix}0\\1\\2\end{bmatrix} + \frac{\sqrt{2}}{4}\begin{bmatrix}2\\1\\0\end{bmatrix}
= \frac{\sqrt{2}}{4}\,\Phi_0^{(1)} + \frac{\sqrt{2}}{4}\,\Phi_0^{(2)}\ ,\qquad
\Phi^{(2)} = \begin{bmatrix}1\\0\\-1\end{bmatrix}
= -\frac12\begin{bmatrix}0\\1\\2\end{bmatrix} + \frac12\begin{bmatrix}2\\1\\0\end{bmatrix}
= -\frac12\,\Phi_0^{(1)} + \frac12\,\Phi_0^{(2)} \qquad (7–28)
\]

Hence, the 1st and 2nd eigenmodes are originally contained in the subspace spanned by the basis Φ0. As seen, these eigenmodes move out of the iterated subspace during the iteration process.

7.3 Subspace Iteration

As is the case for the simultaneous inverse vector iteration algorithm, the subspace iteration algorithm presumes that a start subspace V0, spanned by the vector basis Φ0 = [Φ0(1) Φ0(2) · · · Φ0(n1)], has been defined.

At the kth step of the iteration process a vector basis Φk = [Φk(1) Φk(2) · · · Φk(n1)], which spans the iterated subspace Vk, has been obtained. Based on this, a simultaneous inverse vector iteration is performed

\[
\bar{\Phi}_{k+1} = A\Phi_k\ ,\qquad k = 0, 1, \ldots \qquad (7–29)
\]

where A = K−1 M, cf. (5-4). Next, a Rayleigh-Ritz analysis is performed using Φ̄k+1 as a
Ritz basis, in order to obtain approximate solutions to the lowest n 1 eigenmodes and eigenval-
ues. This requires the solution of the following reduced generalized eigenvalue problem of the
dimension n1 , cf. (1-14), (4-49)

K̃k+1 Qk+1 = M̃k+1 Qk+1 Rk+1 , k = 0, 1, . . . (7–30)

M̃k+1 and K̃k+1 denote the mass and stiffness matrices projected on the subspace Vk+1, cf. (4-45)

\[
\tilde{M}_{k+1} = \bar{\Phi}_{k+1}^T M\,\bar{\Phi}_{k+1}\ ,\qquad
\tilde{K}_{k+1} = \bar{\Phi}_{k+1}^T K\,\bar{\Phi}_{k+1} \qquad (7–31)
\]
 
Qk+1 = [qk+1(1) qk+1(2) · · · qk+1(n1)] of the dimension n1 × n1 contains the eigenvectors of the eigenvalue problem (7-30). In what follows, the eigenvectors qk+1(i) are assumed to be normalized to unit modal mass with respect to the projected mass matrix, i.e.

\[
q_{k+1}^{(i)\,T}\,\tilde{M}_{k+1}\,q_{k+1}^{(j)} =
\begin{cases} 0\ , & i \neq j\\ 1\ , & i = j \end{cases} \qquad (7–32)
\]
Rk+1 is a diagonal matrix containing the corresponding eigenvalues of (7-30) in the main diagonal

\[
R_{k+1} = \begin{bmatrix} \rho_{1,k+1} & 0 & \cdots & 0\\ 0 & \rho_{2,k+1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \rho_{n_1,k+1} \end{bmatrix} \qquad (7–33)
\]

The eigenvalues ρj,k+1, j = 1, ..., n1, indicate the estimates of the eigenvalues after the kth iteration. These are all upper bounds to the corresponding eigenvalues of the full problem, cf. (4-57).

At the end of the kth iteration step a new estimate of the lowest n1 eigenvectors is determined from, cf. (4-51)

Φk+1 = Φ̄k+1 Qk+1 (7–34)


If the column vectors in Qk+1 have been normalized to unit modal mass with respect to M̃k+1 ,
the M-orthogonal column vectors of Φk+1 will automatically be normalized to unit modal mass
with respect to M, cf. (4-55).

Next, the calculations in (7-30), (7-31), (7-34) are repeated with the new estimate of the nor-
malized eigenmodes Φk+1 .

At convergence of the subspace iteration algorithm the lowest n1 eigenvectors and eigenvalues are retrieved from

\[
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n_1)}\big] = \Phi_\infty\ ,\qquad
\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_{n_1}\end{bmatrix}
= R_\infty = \pm Q_\infty \qquad (7–35)
\]

At convergence, Q∞ can be shown to be a diagonal matrix, where the numerical values of the components are equal to the eigenvalues of the original problem, as indicated in (7-35).

It should be realized that subspace iteration involves iteration at two levels. Primarily, a global simultaneous inverse vector iteration loop defined by the index k is performed. Inside this loop a secondary iteration process is performed for the solution of the eigenvalue problem (7-30). Usually, the latter problem is solved iteratively by means of a general Jacobi iteration algorithm as described in Section 6.3. Because the applied similarity transformations in the general Jacobi algorithm are not orthonormal, the eigenvectors qk(j) are not normalized to unit modal mass at convergence. Hence, in order to fulfill the requirements (7-32) this normalization should be performed after convergence. Further, the eigenvalues will not be ordered in ascending order of magnitude as presumed in (7-35), cf. Box 6.4.

The convergence rates for the components of the kth eigenmode and for the kth eigenvalue, r1,k and r2,k, are defined as

\[
r_{1,k} = \frac{\lambda_k}{\lambda_{n_1+1}}\ ,\qquad
r_{2,k} = \frac{\lambda_k^2}{\lambda_{n_1+1}^2} = r_{1,k}^2\ ,\qquad k = 1,\ldots,n_1 \qquad (7–36)
\]

Hence, convergence is achieved first for the lowest mode and last for mode k = n1, as demonstrated in Example 7.2 below. This represents a marked difference from simultaneous inverse vector iteration, where, as mentioned, the convergence rate seems to be almost identical for all modes contained in the subspace. A rule of thumb says that approximately 10 subspace iterations are needed to obtain a solution for the components of Φ(1) with 6 correct digits.

Box 7.3: Subspace iteration algorithm

Given the n1-dimensional start vector basis Φ0 = [Φ0(1) Φ0(2) · · · Φ0(n1)]. The base vectors must be linearly independent, but need not be normalized to unit modal mass. Repeat the following items for k = 0, 1, ...

1. Perform simultaneous inverse vector iteration:
\[
\bar{\Phi}_{k+1} = A\Phi_k\ ,\qquad A = K^{-1}M
\]

2. Calculate projected mass and stiffness matrices:
\[
\tilde{M}_{k+1} = \bar{\Phi}_{k+1}^T M\,\bar{\Phi}_{k+1}\ ,\qquad \tilde{K}_{k+1} = \bar{\Phi}_{k+1}^T K\,\bar{\Phi}_{k+1}
\]

3. Solve the generalized eigenvalue problem of dimension n1 by means of a general Jacobi iteration algorithm, with the eigenvectors Qk+1 normalized to unit modal mass at exit:
\[
\tilde{K}_{k+1} Q_{k+1} = \tilde{M}_{k+1} Q_{k+1} R_{k+1}
\]

4. Calculate the new solution for the eigenvectors:
\[
\Phi_{k+1} = \bar{\Phi}_{k+1} Q_{k+1}
\]

After convergence has been achieved the eigenvalues and eigenmodes normalized to unit modal mass are obtained from:
\[
\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_{n_1}\end{bmatrix}
= R_\infty = \pm Q_\infty\ ,\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n_1)}\big] = \Phi_\infty
\]

Finally, a Sturm sequence check should be performed to ensure that the lowest n1 eigenpairs have been calculated.
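A MATLAB sketch of Box 7.3 might look as follows. For brevity the reduced eigenvalue problem in item 3 is solved here with MATLAB's built-in generalized eigensolver instead of a general Jacobi iteration, and the function name, tolerance and iteration cap are choices of this sketch, not of the text.

```matlab
function [Lambda, Phi] = subspace_iteration(K, M, Phi0, tol)
% Subspace iteration, Box 7.3: inverse iteration followed by a
% Rayleigh-Ritz analysis on the iterated basis in every step.
[L_K, U_K] = lu(K);
Phi = Phi0;
rho_old = inf(size(Phi0, 2), 1);
for k = 1:1000
    PhiBar = U_K \ (L_K \ (M * Phi));       % item 1, cf. (7-29)
    Mt = PhiBar' * M * PhiBar;              % item 2, cf. (7-31)
    Kt = PhiBar' * K * PhiBar;
    [Q, R] = eig(Kt, Mt);                   % item 3, cf. (7-30)
    [rho, idx] = sort(diag(R));             % ascending eigenvalue order
    Q = Q(:, idx);
    for j = 1:size(Q, 2)                    % unit modal mass w.r.t. Mt
        Q(:, j) = Q(:, j) / sqrt(Q(:, j)' * Mt * Q(:, j));
    end
    Phi = PhiBar * Q;                       % item 4, cf. (7-34)
    if max(abs(rho - rho_old) ./ rho) < tol, break; end
    rho_old = rho;
end
Lambda = diag(rho);
end
```

Replacing eig with a general Jacobi routine, as the text prescribes, would also require the explicit sorting and normalization shown above, cf. the remarks after (7-35).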

In order to speed up the iteration towards the n1 modes actually wanted, the dimension of the iterated subspace is sometimes increased to n2 > n1. Then, the convergence rate of the iteration vector for the highest mode of interest decreases to

\[
r_{1,n_1} = \frac{\lambda_{n_1}}{\lambda_{n_2+1}} \qquad (7–37)
\]
In case of an adverse choice of the start vector basis Φ0 it may happen that one of the eigenmodes searched for, Φ(j), j = 1, ..., n1, is M-orthogonal to the start subspace, i.e.

\[
\Phi^{(j)\,T} M\,\Phi_0^{(k)} = 0\ ,\qquad k = 1, 2, \ldots, n_1 \qquad (7–38)
\]

In this case the subspace iteration algorithm converges towards the eigenmodes Φ(1), ..., Φ(j−1), Φ(j+1), ..., Φ(n1), Φ(n1+1). In principle a similar problem occurs in simultaneous inverse vector iteration, although round-off errors normally eliminate this possibility.

Peculiar to subspace iteration is that eigenmodes contained in the initial basis Φ0 remain in later iterated bases. Hence, if Φ(j), j = n1 + 1, ..., n, is contained in Φ0, this mode will be among the calculated modes.

In both cases we are left with the problem of deciding whether the calculated n1 eigenmodes are the lowest n1 modes of the full system. For this reason a subspace iteration should always be followed by a Sturm sequence check. This is performed in the following way. Let µ be a number slightly larger than the largest calculated eigenvalue ρn1,∞, and perform the following Gauss factorization of the matrix K − µM

\[
K - \mu M = L\,D\,L^T \qquad (7–39)
\]

where L and D are given by (3-2), (3-3). The number of eigenvalues less than µ is equal to the number of negative elements in the main diagonal of the diagonal matrix D, cf. Section 3.1. Hence, the analysis should show exactly n1 negative elements in D. Alternatively, the same information may be extracted from the number of sign changes in the sign sequence sign(P(n)(µ)), sign(P(n−1)(µ)), ..., sign(P(0)(µ)), where P(n−1)(µ), ..., P(0)(µ) denotes the Sturm sequence of characteristic polynomials, and P(n)(µ) is a dummy positive component in the sequence, cf. Section 3.2.

The marked difference between the subspace iteration algorithm and the simultaneous inverse vector iteration algorithm is that the orthonormalization process preventing ill-conditioning of the iterated vector basis is performed in the former case by an eigenvector approach related to the Rayleigh-Ritz analysis, whereas a Gram-Schmidt orthogonalization procedure is used in the latter case. There is no marked difference in the rate of convergence of the two algorithms.

Example 7.2: Subspace iteration

The generalized eigenvalue problem defined in Example 6.2 is considered again. Using the same initial start basis (7-18) as in Example 7.1, the problem is solved in this example by means of subspace iteration.

At the 1st iteration step (k = 0) the simultaneous inverse vector iteration produces the vector basis Φ̄1, which is again given by (7-20).

Based on Φ̄1, the following projected mass and stiffness matrices are calculated, cf. (4-45), (7-20), (7-31)

\[
\tilde{M}_1 = \bar{\Phi}_1^T M\,\bar{\Phi}_1
= \begin{bmatrix} 0.2500 & 0.7500\\ 0.5000 & 0.5000\\ 0.7500 & 0.2500\end{bmatrix}^T
\begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}
\begin{bmatrix} 0.2500 & 0.7500\\ 0.5000 & 0.5000\\ 0.7500 & 0.2500\end{bmatrix}
= \begin{bmatrix} 0.5625 & 0.4375\\ 0.4375 & 0.5625\end{bmatrix}
\]
\[
\tilde{K}_1 = \bar{\Phi}_1^T K\,\bar{\Phi}_1
= \begin{bmatrix} 0.2500 & 0.7500\\ 0.5000 & 0.5000\\ 0.7500 & 0.2500\end{bmatrix}^T
\begin{bmatrix} 2 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 2\end{bmatrix}
\begin{bmatrix} 0.2500 & 0.7500\\ 0.5000 & 0.5000\\ 0.7500 & 0.2500\end{bmatrix}
= \begin{bmatrix} 1.2500 & 0.7500\\ 0.7500 & 1.2500\end{bmatrix} \qquad (7–40)
\]

The corresponding eigenvalue problem (7-30) becomes

\[
\tilde{K}_1 Q_1 = \tilde{M}_1 Q_1 R_1 \;\Rightarrow\;
\begin{bmatrix} 1.2500 & 0.7500\\ 0.7500 & 1.2500\end{bmatrix}\big[q_1^{(1)}\ q_1^{(2)}\big]
= \begin{bmatrix} 0.5625 & 0.4375\\ 0.4375 & 0.5625\end{bmatrix}\big[q_1^{(1)}\ q_1^{(2)}\big]
\begin{bmatrix} \rho_{1,1} & 0\\ 0 & \rho_{2,1}\end{bmatrix} \;\Rightarrow
\]
\[
R_1 = \begin{bmatrix} 2 & 0\\ 0 & 4\end{bmatrix} = \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix}\ ,\qquad
Q_1 = \begin{bmatrix} \tfrac{\sqrt{2}}{2} & -2\\[2pt] \tfrac{\sqrt{2}}{2} & 2\end{bmatrix} \qquad (7–41)
\]

The estimate of the lowest eigenvectors after the 1st iteration becomes, cf. (7-34)

\[
\Phi_1 = \bar{\Phi}_1 Q_1
= \begin{bmatrix} 0.2500 & 0.7500\\ 0.5000 & 0.5000\\ 0.7500 & 0.2500\end{bmatrix}
\begin{bmatrix} \tfrac{\sqrt{2}}{2} & -2\\[2pt] \tfrac{\sqrt{2}}{2} & 2\end{bmatrix}
= \begin{bmatrix} \tfrac{\sqrt{2}}{2} & 1\\[2pt] \tfrac{\sqrt{2}}{2} & 0\\[2pt] \tfrac{\sqrt{2}}{2} & -1\end{bmatrix}
= \big[\Phi^{(1)}\ \Phi^{(2)}\big] \qquad (7–42)
\]

(7-41) and (7-42) are the exact eigenvalues and eigenmodes, cf. (1-87). Hence, convergence is obtained in just a single iteration. This is so because the start subspace V0, spanned by the vector basis Φ0, contains the eigenmodes Φ(1) and Φ(2), as shown by (7-28). This property is peculiar to the subspace iteration algorithm compared to the simultaneous inverse vector iteration technique.

Next, let us perform the same calculations using the start basis

\[
\Phi_0 = \big[\Phi_0^{(1)}\ \Phi_0^{(2)}\big] = \begin{bmatrix} 1 & -1\\ 2 & 2\\ 3 & -3\end{bmatrix} \qquad (7–43)
\]

The simultaneous inverse vector iteration (7-29) provides, cf. (7-19)

\[
\bar{\Phi}_1 = A\Phi_0
= \begin{bmatrix} 0.2917 & 0.1667 & 0.0417\\ 0.0833 & 0.3333 & 0.0833\\ 0.0417 & 0.1667 & 0.2917\end{bmatrix}
\begin{bmatrix} 1 & -1\\ 2 & 2\\ 3 & -3\end{bmatrix}
= \begin{bmatrix} 0.7500 & -0.0833\\ 1.0000 & 0.3333\\ 1.2500 & -0.5833\end{bmatrix} \qquad (7–44)
\]

The projected mass and stiffness matrices become

\[
\tilde{M}_1 = \begin{bmatrix} 0.7500 & -0.0833\\ 1.0000 & 0.3333\\ 1.2500 & -0.5833\end{bmatrix}^T
\begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}
\begin{bmatrix} 0.7500 & -0.0833\\ 1.0000 & 0.3333\\ 1.2500 & -0.5833\end{bmatrix}
= \begin{bmatrix} 2.0625 & -0.0625\\ -0.0625 & 0.2847\end{bmatrix}
\]
\[
\tilde{K}_1 = \begin{bmatrix} 0.7500 & -0.0833\\ 1.0000 & 0.3333\\ 1.2500 & -0.5833\end{bmatrix}^T
\begin{bmatrix} 2 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 2\end{bmatrix}
\begin{bmatrix} 0.7500 & -0.0833\\ 1.0000 & 0.3333\\ 1.2500 & -0.5833\end{bmatrix}
= \begin{bmatrix} 4.2500 & -0.2500\\ -0.2500 & 1.5833\end{bmatrix} \qquad (7–45)
\]

The solution of the corresponding generalized eigenvalue problem (7-30) becomes

\[
R_1 = \begin{bmatrix} 2.0534 & 0\\ 0 & 5.5656\end{bmatrix}\ ,\qquad
Q_1 = \begin{bmatrix} -0.6982 & -0.0254\\ -0.0851 & -1.8784\end{bmatrix} \qquad (7–46)
\]

The estimate of the lowest eigenmodes after the 1st iteration becomes, cf. (7-34)

\[
\Phi_1 = \bar{\Phi}_1 Q_1
= \begin{bmatrix} 0.7500 & -0.0833\\ 1.0000 & 0.3333\\ 1.2500 & -0.5833\end{bmatrix}
\begin{bmatrix} -0.6982 & -0.0254\\ -0.0851 & -1.8784\end{bmatrix}
= \begin{bmatrix} -0.5165 & 0.1375\\ -0.7265 & -0.6516\\ -0.8231 & 1.0640\end{bmatrix} \qquad (7–47)
\]

Correspondingly, after the 2nd, 7th and 14th iteration steps the following matrices are calculated

\[
R_2 = \begin{bmatrix} 2.0118 & 0\\ 0 & 5.2263\end{bmatrix}\ ,\quad
Q_2 = \begin{bmatrix} -2.0171 & 0.1513\\ -0.0887 & -5.3145\end{bmatrix}\ ,\quad
\Phi_2 = \begin{bmatrix} 0.6195 & 0.0821\\ 0.7241 & 0.5686\\ 0.7535 & -1.1604\end{bmatrix} \qquad (7–48)
\]
\[
R_7 = \begin{bmatrix} 2.0000 & 0\\ 0 & 4.0533\end{bmatrix}\ ,\quad
Q_7 = \begin{bmatrix} -2.0000 & 0.0011\\ -0.0007 & -4.0661\end{bmatrix}\ ,\quad
\Phi_7 = \begin{bmatrix} -0.7067 & -0.8711\\ -0.7074 & -0.1155\\ -0.7069 & 1.1020\end{bmatrix} \qquad (7–49)
\]
\[
R_{14} = \begin{bmatrix} 2.0000 & 0\\ 0 & 4.0002\end{bmatrix}\ ,\quad
Q_{14} = \begin{bmatrix} -2.0000 & 0.0000\\ -0.0000 & -4.0002\end{bmatrix}\ ,\quad
\Phi_{14} = \begin{bmatrix} 0.7071 & 0.9931\\ 0.7071 & 0.0068\\ 0.7071 & -1.0068\end{bmatrix} \qquad (7–50)
\]

As seen, the subspace iteration process determines the 1st eigenvalue and eigenvector after 7 iterations, whereas the 2nd eigenvector has not yet been calculated with sufficient accuracy even after 14 iterations. By contrast, the simultaneous inverse vector iteration managed to achieve convergence for this quantity after 14 iterations, see (7-26).

The 2nd calculated eigenvalue becomes ρ2,14 = 4.0002. Then, let µ = 4.05 and perform a Gauss factorization of the matrix K − 4.05M, i.e.

\[
K - 4.05M = \begin{bmatrix} -0.0250 & -1.0000 & 0.0000\\ -1.0000 & -0.0500 & -1.0000\\ 0.0000 & -1.0000 & -0.0250\end{bmatrix}
= LDL^T =
\begin{bmatrix} 1 & 0 & 0\\ 40 & 1 & 0\\ 0 & -0.0250 & 1\end{bmatrix}
\begin{bmatrix} -0.0250 & 0 & 0\\ 0 & 39.950 & 0\\ 0 & 0 & -0.0500\end{bmatrix}
\begin{bmatrix} 1 & 40 & 0\\ 0 & 1 & -0.0250\\ 0 & 0 & 1\end{bmatrix} \qquad (7–51)
\]

It follows that two components in the main diagonal of D are negative, from which it is concluded that two eigenvalues are smaller than µ = 4.05. In turn this means that the two eigensolutions obtained in (7-50) are indeed the lowest two eigensolutions of the original system.

Finally, consider the start vector basis

\[
\Phi_0 = \big[\Phi_0^{(1)}\ \Phi_0^{(2)}\big] = \begin{bmatrix} 0 & 2\\ -1 & -1\\ 2 & 0\end{bmatrix} \qquad (7–52)
\]

Now,

\[
\Phi^{(1)\,T} M\,\Phi_0^{(1)}
= \frac{\sqrt{2}}{2}\begin{bmatrix}1\\1\\1\end{bmatrix}^T
\begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}
\begin{bmatrix} 0\\ -1\\ 2\end{bmatrix} = 0\ ,\qquad
\Phi^{(1)\,T} M\,\Phi_0^{(2)}
= \frac{\sqrt{2}}{2}\begin{bmatrix}1\\1\\1\end{bmatrix}^T
\begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}
\begin{bmatrix} 2\\ -1\\ 0\end{bmatrix} = 0 \qquad (7–53)
\]

It follows that the lowest eigenmode Φ(1) is M-orthogonal to the selected start vector basis. Hence, it should be expected that the algorithm converges towards Φ(2) and Φ(3). Moreover, in the present three-dimensional case a start subspace which is M-orthogonal to Φ(1) must contain Φ(2) and Φ(3). Actually, cf. (1-87)

\[
\Phi^{(2)} = \begin{bmatrix} 1\\ 0\\ -1\end{bmatrix}
= -\frac12\begin{bmatrix} 0\\ -1\\ 2\end{bmatrix} + \frac12\begin{bmatrix} 2\\ -1\\ 0\end{bmatrix}
= -\frac12\,\Phi_0^{(1)} + \frac12\,\Phi_0^{(2)}\ ,\qquad
\Phi^{(3)} = \frac{\sqrt{2}}{2}\begin{bmatrix} 1\\ -1\\ 1\end{bmatrix}
= \frac{\sqrt{2}}{4}\begin{bmatrix} 0\\ -1\\ 2\end{bmatrix} + \frac{\sqrt{2}}{4}\begin{bmatrix} 2\\ -1\\ 0\end{bmatrix}
= \frac{\sqrt{2}}{4}\,\Phi_0^{(1)} + \frac{\sqrt{2}}{4}\,\Phi_0^{(2)} \qquad (7–54)
\]

Hence, convergence towards Φ(2) and Φ(3) should take place in a single iteration step. Actually, after the 1st subspace iteration the following matrices are calculated

\[
R_1 = \begin{bmatrix} 4 & 0\\ 0 & 6\end{bmatrix}\ ,\qquad
Q_1 = \begin{bmatrix} -2.0000 & -2.1213\\ 2.0000 & -2.1213\end{bmatrix}\ ,\qquad
\Phi_1 = \begin{bmatrix} 1.0000 & -0.7071\\ 0.0000 & 0.7071\\ -1.0000 & -0.7071\end{bmatrix} \qquad (7–55)
\]

The 2nd calculated eigenvalue becomes ρ2,1 = 6. In order to check whether ρ2,1 = λ2 or ρ2,1 = λ3, we choose µ = 6.05 and perform a Gauss factorization of the matrix K − 6.05M, i.e.

\[
K - 6.05M = \begin{bmatrix} -1.0250 & -1.0000 & 0.0000\\ -1.0000 & -2.0500 & -1.0000\\ 0.0000 & -1.0000 & -1.0250\end{bmatrix}
= LDL^T =
\begin{bmatrix} 1 & 0 & 0\\ 0.9756 & 1 & 0\\ 0 & 0.9308 & 1\end{bmatrix}
\begin{bmatrix} -1.0250 & 0 & 0\\ 0 & -1.0744 & 0\\ 0 & 0 & -0.0942\end{bmatrix}
\begin{bmatrix} 1 & 0.9756 & 0\\ 0 & 1 & 0.9308\\ 0 & 0 & 1\end{bmatrix} \qquad (7–56)
\]

It follows that three components in the main diagonal of D are negative, from which it is concluded that the largest of the two calculated eigenvalues must be equal to the largest eigenvalue of the original system, i.e. ρ2,1 = λ3. Still, we do not know whether ρ1,1 = λ1 or ρ1,1 = λ2. In order to investigate this, another calculation is performed with µ = 4.05. The Gauss factorization of the matrix K − 4.05M has already been performed as indicated by (7-51). Since this result shows that two eigenvalues exist which are smaller than µ = 4.05, ρ1,1 = 4 must be the largest of these, and hence the 2nd eigenvalue of the original system.

7.4 Characteristic Polynomial Iteration


In this section it is assumed that the stiffness and mass matrices have been reduced to three diagonal form through a series of similarity transformations, as explained in Section 6.4, corresponding to, cf. (6-32)

\[
K = \begin{bmatrix} \alpha_1 & \beta_1 & 0 & \cdots & 0 & 0\\ \beta_1 & \alpha_2 & \beta_2 & \cdots & 0 & 0\\ 0 & \beta_2 & \alpha_3 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & \alpha_{n-1} & \beta_{n-1}\\ 0 & 0 & 0 & \cdots & \beta_{n-1} & \alpha_n \end{bmatrix} \qquad (7–57)
\]
\[
M = \begin{bmatrix} \gamma_1 & \delta_1 & 0 & \cdots & 0 & 0\\ \delta_1 & \gamma_2 & \delta_2 & \cdots & 0 & 0\\ 0 & \delta_2 & \gamma_3 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & \gamma_{n-1} & \delta_{n-1}\\ 0 & 0 & 0 & \cdots & \delta_{n-1} & \gamma_n \end{bmatrix} \qquad (7–58)
\]

The Householder reduction in Section 6.4 results in M = I. The slightly more general case, where M is on three diagonal form, has been assumed in what follows. In principle, polynomial iteration methods work equally well on fully populated stiffness and mass matrices; however, the computational efforts then become too extensive to make them competitive.

Now, the characteristic equation of the generalized eigenvalue problem can be written in the following form, cf. (1-10)

\[
P(\lambda) = P^{(0)}(\lambda) = \det\big(K - \lambda M\big) =
\det\begin{bmatrix}
\alpha_1 - \lambda\gamma_1 & \beta_1 - \lambda\delta_1 & 0 & \cdots & 0 & 0\\
\beta_1 - \lambda\delta_1 & \alpha_2 - \lambda\gamma_2 & \beta_2 - \lambda\delta_2 & \cdots & 0 & 0\\
0 & \beta_2 - \lambda\delta_2 & \alpha_3 - \lambda\gamma_3 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \alpha_{n-1} - \lambda\gamma_{n-1} & \beta_{n-1} - \lambda\delta_{n-1}\\
0 & 0 & 0 & \cdots & \beta_{n-1} - \lambda\delta_{n-1} & \alpha_n - \lambda\gamma_n
\end{bmatrix}
= \big(\alpha_n - \lambda\gamma_n\big)P^{(1)}(\lambda) - \big(\beta_{n-1} - \lambda\delta_{n-1}\big)^2 P^{(2)}(\lambda) \qquad (7–59)
\]

The last statement in (7-59) is obtained by expanding the determinant along the last row. P(1)(λ) and P(2)(λ) denote the characteristic polynomials obtained by omitting the last row and column, and the last two rows and columns, of the matrix K − λM, respectively, cf. (3-26). The validity of the result (7-59) is demonstrated for a 4-dimensional case in Example 7.3. In turn, P(1)(λ) may be expressed in terms of P(2)(λ) and P(3)(λ) by a similar expression. Actually, the complete Sturm sequence of characteristic polynomials may be calculated recursively from the algorithm

\[
\begin{aligned}
P^{(n-1)}(\lambda) &= \alpha_1 - \lambda\gamma_1\\
P^{(n-2)}(\lambda) &= \big(\alpha_1 - \lambda\gamma_1\big)\big(\alpha_2 - \lambda\gamma_2\big) - \big(\beta_1 - \lambda\delta_1\big)^2\\
P^{(n-m)}(\lambda) &= \big(\alpha_m - \lambda\gamma_m\big)P^{(n-m+1)}(\lambda) - \big(\beta_{m-1} - \lambda\delta_{m-1}\big)^2 P^{(n-m+2)}(\lambda)\ ,\quad m = 3, 4, \ldots, n
\end{aligned} \qquad (7–60)
\]

The effectiveness of characteristic polynomial iteration methods for matrices on three diagonal
form relies on the result (7-60).
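As an illustration, the recursion (7-60) may be coded directly; the following MATLAB sketch and its interface are additions of this edition, not part of the original text.

```matlab
function P = sturm_sequence(alpha, beta, gamma, delta, lambda)
% Evaluate the Sturm sequence of characteristic polynomials (7-60) at lambda,
% for three diagonal K (diagonal alpha, off-diagonal beta) and three diagonal
% M (diagonal gamma, off-diagonal delta). P(m+1) stores P^(n-m)(lambda),
% m = 0..n, with P(1) = P^(n)(lambda) = 1 the dummy positive element;
% P(end) is the characteristic polynomial P(lambda) = P^(0)(lambda).
n = numel(alpha);
P = zeros(n + 1, 1);
P(1) = 1;                                    % dummy P^(n)
P(2) = alpha(1) - lambda*gamma(1);           % P^(n-1)
for m = 2:n
    a = alpha(m) - lambda*gamma(m);
    b = beta(m-1) - lambda*delta(m-1);
    P(m+1) = a*P(m) - b^2*P(m-1);            % recursion of (7-60)
end
end
```

The number of sign changes in the returned sequence then equals the number of eigenvalues smaller than lambda, which supplies the bracketing figures µ0 and µ1 needed below.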
 
Assume that the jth eigensolution (λj, Φ(j)) is wanted. At first one needs to determine two figures µ0 and µ1 fulfilling λj−1 < µ0 < λj < µ1 < λj+1. This is done based on the sequence of signs sign(P(n)(µ)), sign(P(n−1)(µ)), ..., sign(P(1)(µ)), sign(P(0)(µ)), in which the number of sign changes indicates the total number of eigenvalues smaller than µ, and where P(n)(µ) is a dummy positive figure, cf. Section 3.2.
In Fig. 7-2 two points µk−1 and µk are marked on the λ-axis in the vicinity of the eigenvalue searched for, which is λ1 in the illustrated case. The values of the characteristic polynomial at these points, P(µk−1) and P(µk), may easily be calculated by means of (7-60) (notice that P(µ) = P(0)(µ)). The line through the points (µk−1, P(µk−1)) and (µk, P(µk)) has the equation

\[
y(\lambda) = P(\mu_k) + \big(P(\mu_k) - P(\mu_{k-1})\big)\frac{\lambda - \mu_k}{\mu_k - \mu_{k-1}} \qquad (7–61)
\]

Fig. 7–2 Secant iteration of the characteristic equation towards λ1. (The figure shows P(λ) with roots λ1, λ2, λ3, the secant line (7-61) through the points at µk−1 and µk, and its intersection µk+1 with the λ-axis.)

The line defined by (7-61) intersects the λ-axis at the point µk+1. It is clear that this point will be closer to λj than both µk−1 and µk. The intersection point is obtained as the solution of y(µk+1) = 0, which gives

\[
\mu_{k+1} = \mu_k - \frac{P(\mu_k)}{P(\mu_k) - P(\mu_{k-1})}\big(\mu_k - \mu_{k-1}\big) \qquad (7–62)
\]

Next, the iteration index is raised to k + 1, and a new intersection point µk+2 is obtained. The sequence µ0, µ1, µ2, ... converges relatively fast to the eigenvalue λj, as demonstrated below in Example 7.4.
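A minimal MATLAB sketch of the secant iteration (7-62) might look as follows. The bracketing values mu0, mu1, the function handle P (e.g. wrapping the sturm_sequence sketch above and returning its last element) and the stopping tolerance are assumptions of this sketch.

```matlab
% Secant iteration (7-62) towards the eigenvalue lambda_j bracketed by
% mu0 < lambda_j < mu1; P is a function handle returning the characteristic
% polynomial P(mu) = P^(0)(mu).
mu = [mu0, mu1];
for k = 2:100
    Pk  = P(mu(k));
    Pk1 = P(mu(k-1));
    mu(k+1) = mu(k) - Pk / (Pk - Pk1) * (mu(k) - mu(k-1));   % cf. (7-62)
    if abs(mu(k+1) - mu(k)) < 1e-12 * abs(mu(k)), break; end
end
lambda_j = mu(end);
```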

Box 7.4: Characteristic polynomial iteration algorithm

In order to calculate the jth eigenvalue λj and the jth eigenvector Φ(j) the following items are performed

1. Based on the sequence of signs sign(P(n)(µ)), sign(P(n−1)(µ)), ..., sign(P(1)(µ)), sign(P(0)(µ)) of the Sturm sequence of characteristic polynomials, determine two figures µ0 and µ1 fulfilling the inequalities:
λj−1 < µ0 < λj < µ1 < λj+1

2. Perform secant iteration in search of λj = µ∞ according to the algorithm:
\[
\mu_{k+1} = \mu_k - \frac{P(\mu_k)}{P(\mu_k) - P(\mu_{k-1})}\big(\mu_k - \mu_{k-1}\big)
\]

3. Determine the unnormalized eigenmode Φ̄(j) from the algorithm (7-64).

4. Normalize the eigenmode to unit modal mass:
\[
\Phi^{(j)} = \frac{\bar{\Phi}^{(j)}}{\sqrt{\bar{\Phi}^{(j)\,T} M\,\bar{\Phi}^{(j)}}}
\]

Alternatively, the eigenvalue λj may be determined by means of a Sturm sequence check, where the interval ]µ0, µ1[ is increasingly narrowed around the eigenvalue λj by bisection of the previous interval. This algorithm, which is merely the telescope method described in Section 3.1, will generally converge much more slowly than the secant iteration algorithm.
 
Finally, the components Φ1(j), Φ2(j), ..., Φn(j) of the eigenmode Φ(j) are determined as non-trivial solutions to the linear equations

\[
\big(K - \lambda_j M\big)\Phi^{(j)} = 0 \;\Rightarrow\;
\begin{bmatrix}
\alpha_1 - \lambda_j\gamma_1 & \beta_1 - \lambda_j\delta_1 & 0 & \cdots & 0 & 0\\
\beta_1 - \lambda_j\delta_1 & \alpha_2 - \lambda_j\gamma_2 & \beta_2 - \lambda_j\delta_2 & \cdots & 0 & 0\\
0 & \beta_2 - \lambda_j\delta_2 & \alpha_3 - \lambda_j\gamma_3 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \alpha_{n-1} - \lambda_j\gamma_{n-1} & \beta_{n-1} - \lambda_j\delta_{n-1}\\
0 & 0 & 0 & \cdots & \beta_{n-1} - \lambda_j\delta_{n-1} & \alpha_n - \lambda_j\gamma_n
\end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\\ \Phi_3^{(j)}\\ \vdots\\ \Phi_{n-1}^{(j)}\\ \Phi_n^{(j)} \end{bmatrix}
= \begin{bmatrix} 0\\ 0\\ 0\\ \vdots\\ 0\\ 0\end{bmatrix} \qquad (7–63)
\]

Let Φ̄(j) = [Φ̄1(j), Φ̄2(j), ..., Φ̄n(j)]ᵀ denote the eigenmode with components arbitrarily normalized. Setting Φ̄1(j) = 1, the equations (7-63) may be solved recursively from above by the following algorithm

\[
\begin{aligned}
\bar{\Phi}_2^{(j)} &= -\frac{\alpha_1 - \lambda_j\gamma_1}{\beta_1 - \lambda_j\delta_1}\cdot 1\\
\bar{\Phi}_3^{(j)} &= -\frac{\beta_1 - \lambda_j\delta_1}{\beta_2 - \lambda_j\delta_2}\cdot 1 - \frac{\alpha_2 - \lambda_j\gamma_2}{\beta_2 - \lambda_j\delta_2}\,\bar{\Phi}_2^{(j)}\\
\bar{\Phi}_m^{(j)} &= -\frac{\beta_{m-2} - \lambda_j\delta_{m-2}}{\beta_{m-1} - \lambda_j\delta_{m-1}}\,\bar{\Phi}_{m-2}^{(j)} - \frac{\alpha_{m-1} - \lambda_j\gamma_{m-1}}{\beta_{m-1} - \lambda_j\delta_{m-1}}\,\bar{\Phi}_{m-1}^{(j)}\ ,\quad m = 4, \ldots, n
\end{aligned} \qquad (7–64)
\]

Hence, the determination of the components of the vector Φ̄(j) is almost free. Obviously, the indicated algorithm breaks down if any of the denominators βm−1 − λj δm−1 = 0. This means that the algorithm should be extended with alternatives to deal with such exceptions.
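In MATLAB the back-substitution (7-64), combined with the normalization (7-65) below, may be sketched as follows; the function name and interface are choices of this sketch, and the denominators are assumed non-zero.

```matlab
function Phi = tridiag_eigvec(alpha, beta, gamma, delta, lambda, M)
% Recover the eigenvector of the converged eigenvalue lambda by the recursion
% (7-64), setting the first component to 1, and normalize it to unit modal
% mass as in (7-65). M is the full (three diagonal) mass matrix.
n = numel(alpha);
Phi = zeros(n, 1);
Phi(1) = 1;
Phi(2) = -(alpha(1) - lambda*gamma(1)) / (beta(1) - lambda*delta(1));
for m = 3:n
    b2 = beta(m-2)  - lambda*delta(m-2);
    a1 = alpha(m-1) - lambda*gamma(m-1);
    b1 = beta(m-1)  - lambda*delta(m-1);
    Phi(m) = -(b2*Phi(m-2) + a1*Phi(m-1)) / b1;   % cf. (7-64)
end
Phi = Phi / sqrt(Phi' * M * Phi);                 % cf. (7-65)
end
```

For the data of Example 7.4 below, this recursion reproduces Φ̄(3) = [1, −1, 1]ᵀ before normalization.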

Finally, the eigenmode Φ̄(j) should be normalized to unit modal mass as follows

\[
\Phi^{(j)} = \frac{\bar{\Phi}^{(j)}}{\sqrt{\bar{\Phi}^{(j)\,T} M\,\bar{\Phi}^{(j)}}} \qquad (7–65)
\]

Example 7.3: Evaluation of determinant

The determinant of the following matrix on three diagonal form of the dimension 4 × 4 is wanted

\[
K = \begin{bmatrix} \alpha_1 & \beta_1 & 0 & 0\\ \beta_1 & \alpha_2 & \beta_2 & 0\\ 0 & \beta_2 & \alpha_3 & \beta_3\\ 0 & 0 & \beta_3 & \alpha_4\end{bmatrix} \qquad (7–66)
\]

Expansion of the determinant along the components of the 4th row provides

\[
\det K = P^{(0)} = \alpha_4\det\begin{bmatrix} \alpha_1 & \beta_1 & 0\\ \beta_1 & \alpha_2 & \beta_2\\ 0 & \beta_2 & \alpha_3\end{bmatrix}
- \beta_3\det\begin{bmatrix} \alpha_1 & \beta_1 & 0\\ \beta_1 & \alpha_2 & 0\\ 0 & \beta_2 & \beta_3\end{bmatrix}
= \alpha_4\det\begin{bmatrix} \alpha_1 & \beta_1 & 0\\ \beta_1 & \alpha_2 & \beta_2\\ 0 & \beta_2 & \alpha_3\end{bmatrix}
- \beta_3^2\det\begin{bmatrix} \alpha_1 & \beta_1\\ \beta_1 & \alpha_2\end{bmatrix}
= \alpha_4 P^{(1)} - \beta_3^2 P^{(2)} \qquad (7–67)
\]

(7-67) has the same recursive structure as described by (7-59).



Example 7.4: Characteristic polynomial iteration

The generalized eigenvalue problem defined in Example 1.4 is considered again. Calculate the 3rd eigenvalue by
secant iteration on the characteristic polynomial, and next determine the corresponding eigenvector.

At first a calculation with µ = 2.5 is performed, which produces the following results

\[
K - 2.5M = \begin{bmatrix} 0.7500 & -1.0000 & 0.0000\\ -1.0000 & 1.5000 & -1.0000\\ 0.0000 & -1.0000 & 0.7500\end{bmatrix} \;\Rightarrow\;
\begin{cases}
P^{(3)}(2.5) = 1 & \mathrm{sign}(P^{(3)}(2.5)) = +\\
P^{(2)}(2.5) = 0.7500 & \mathrm{sign}(P^{(2)}(2.5)) = +\\
P^{(1)}(2.5) = 0.7500\cdot 1.5000 - (-1)^2 = 0.1250 & \mathrm{sign}(P^{(1)}(2.5)) = +\\
P^{(0)}(2.5) = 0.7500\cdot 0.1250 - (-1)^2\cdot 0.7500 = -0.6563 & \mathrm{sign}(P^{(0)}(2.5)) = -
\end{cases} \qquad (7–68)
\]

Hence, the sign sequence of the Sturm sequence becomes + + + −. One sign change occurs in this sequence, from which it is concluded that the lowest eigenvalue λ1 is smaller than µ = 2.5.

Next, a calculation with µ = 5.5 is performed, which provides the results

\[
K - 5.5M = \begin{bmatrix} -0.7500 & -1.0000 & 0.0000\\ -1.0000 & -1.5000 & -1.0000\\ 0.0000 & -1.0000 & -0.7500\end{bmatrix} \;\Rightarrow\;
\begin{cases}
P^{(3)}(5.5) = 1 & \mathrm{sign}(P^{(3)}(5.5)) = +\\
P^{(2)}(5.5) = -0.7500 & \mathrm{sign}(P^{(2)}(5.5)) = -\\
P^{(1)}(5.5) = (-0.7500)\cdot(-1.5000) - (-1)^2 = 0.1250 & \mathrm{sign}(P^{(1)}(5.5)) = +\\
P^{(0)}(5.5) = (-0.7500)\cdot 0.1250 - (-1)^2\cdot(-0.7500) = 0.6563 & \mathrm{sign}(P^{(0)}(5.5)) = +
\end{cases} \qquad (7–69)
\]

Now, the sign sequence of the Sturm sequence becomes + − + +, in which two sign changes occur, from which it is concluded that the lowest two eigenvalues λ1 and λ2 are both smaller than µ = 5.5.

Finally, a calculation with µ = 6.5 is performed, which provides the results

\[
K - 6.5M = \begin{bmatrix} -1.2500 & -1.0000 & 0.0000\\ -1.0000 & -2.5000 & -1.0000\\ 0.0000 & -1.0000 & -1.2500\end{bmatrix} \;\Rightarrow\;
\begin{cases}
P^{(3)}(6.5) = 1 & \mathrm{sign}(P^{(3)}(6.5)) = +\\
P^{(2)}(6.5) = -1.2500 & \mathrm{sign}(P^{(2)}(6.5)) = -\\
P^{(1)}(6.5) = (-1.2500)\cdot(-2.5000) - (-1)^2 = 2.1250 & \mathrm{sign}(P^{(1)}(6.5)) = +\\
P^{(0)}(6.5) = (-1.2500)\cdot 2.1250 - (-1)^2\cdot(-1.2500) = -1.4063 & \mathrm{sign}(P^{(0)}(6.5)) = -
\end{cases} \qquad (7–70)
\]

In this case the sign sequence of the Sturm sequence becomes + − + −, corresponding to three sign changes. Hence, it is concluded that all three eigenvalues λ1, λ2 and λ3 are smaller than µ = 6.5.

From the Sturm sequence checks it is concluded that 5.5 < λ3 < 6.5. Then, we may use the start values µ0 = 5.5 and µ1 = 6.5 in the secant iteration algorithm. Moreover, P(5.5) = P(0)(5.5) = 0.6563 and P(6.5) = P(0)(6.5) = −1.4063, cf. (7-69) and (7-70). Then, from (7-62) it follows for k = 1

\[
\mu_2 = 6.5 - \frac{-1.4063}{(-1.4063) - 0.6563}\,(6.5 - 5.5) = 5.8182 \qquad (7–71)
\]

Next, P(µ2) = P(5.8182) = 0.3156 is calculated by means of the algorithm (7-60), and a new value µ3 is obtained from

\[
\mu_3 = 5.8182 - \frac{0.3156}{0.3156 - (-1.4063)}\,(5.8182 - 6.5) = 5.9431 \qquad (7–72)
\]

During the next 5 iterations the following results were obtained

\[
\begin{aligned}
\mu_4 &= 6.00900500472288\\
\mu_5 &= 5.99960498912941\\
\mu_6 &= 5.99999734553262\\
\mu_7 &= 6.00000000078659\\
\mu_8 &= 6.00000000000000
\end{aligned} \qquad (7–73)
\]

As seen the convergence of the secant iteration algorithm is very fast.

The linear equation (7-63) attains the form

\[
\big(K - 6.0000M\big)\bar{\Phi}^{(3)} = 0 \;\Rightarrow\;
\begin{bmatrix} -1 & -1 & 0\\ -1 & -2 & -1\\ 0 & -1 & -1\end{bmatrix}
\begin{bmatrix} \bar{\Phi}_1^{(3)}\\ \bar{\Phi}_2^{(3)}\\ \bar{\Phi}_3^{(3)}\end{bmatrix}
= \begin{bmatrix} 0\\ 0\\ 0\end{bmatrix} \qquad (7–74)
\]

Setting Φ̄1(3) = 1, the algorithm (7-64) now provides

\[
\bar{\Phi}_2^{(3)} = -\frac{-1}{-1}\cdot 1 = -1\ ,\qquad
\bar{\Phi}_3^{(3)} = -\frac{-1}{-1}\cdot 1 - \frac{-2}{-1}\cdot(-1) = 1
\quad\Rightarrow\quad
\bar{\Phi}^{(3)} = \begin{bmatrix} 1\\ -1\\ 1\end{bmatrix} \qquad (7–75)
\]

Normalization to unit modal mass provides, cf. (1-87)

\[
\Phi^{(3)} = \frac{\sqrt{2}}{2}\begin{bmatrix} 1\\ -1\\ 1\end{bmatrix} \qquad (7–76)
\]

7.5 Exercises

7.1 Consider the mass and stiffness matrices defined in Exercise 4.2.

(a.) Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous inverse vector iteration with the start vector basis
\[
\Phi_0 = \big[\Phi_0^{(1)}\ \Phi_0^{(2)}\big] = \begin{bmatrix} 1 & 1\\ 1 & 0\\ 1 & -1\end{bmatrix}
\]

7.2 Given the symmetric matrices M and K of dimension n.


(a.) Write a MATLAB program, which for a given start basis performs simultaneous inverse vector iteration for the determination of the lowest n1 eigenmodes and eigenvalues.

7.3 Consider the general eigenvalue problem in Exercise 4.2.


(a.) Calculate the two lowest eigenmodes and corresponding eigenvalues by subspace it-
eration using the same start basis as in Exercise 7.1.

7.4 Given the symmetric matrices M and K of dimension n.


(a.) Write a MATLAB program, which for a given start basis performs subspace iteration for the determination of the lowest n1 eigenmodes and eigenvalues.

7.5 Consider the general eigenvalue problem in Exercise 4.2.


(a.) Calculate the 3rd eigenmode and eigenvalue by Sturm sequence iteration (telescope
method).

7.6 Given the symmetric matrices M and K of dimension n on three diagonal form.
(a.) Write a MATLAB program, which performs a Sturm sequence check and secant iteration for the determination of the jth eigenvalue, and next determines the corresponding eigenvector.
Index

acceleration, 18
acceleration vector, 7
adjoint eigenvalue problem, 15, 24
aerodynamic damping load, 8
aerodynamic damping matrix, 8
alternative inverse vector iteration, 96, 97
amplification matrix, 34, 37, 50
argument of complex eigenvalue, 45

central difference algorithm, 32, 40, 44, 46, 47, 49
central difference operator, 19, 40
characteristic equation, 9, 22, 61, 64, 89, 148, 165
characteristic polynomial, 9, 53, 59, 165
characteristic polynomial iteration, 90, 129, 148, 164, 166, 167, 169
Choleski decomposition, 66, 67, 71
commutative matrix multiplication, 15
compatible matrix and vector norms, 83, 86
complex conjugation, 13
complex modal mass, 15
complex unit, 8
conditional stable algorithm, 44, 49
consistent mass matrix, 19
convergence rate of iteration vector, 93, 103, 104, 109, 153, 156, 159
convergence rate of Rayleigh quotient, 94, 103, 106, 153
Crank-Nicolson algorithm, 32, 40, 42, 44, 46–48
cubic convergence, 106

damped eigenmode, 13
damped eigenperiod, 45
damped eigenvalue, 13
damped modal coordinate, 27
damping matrix, 7
decoupling condition, 26, 36
degree of freedom, 69, 147
diagonal matrix, 91, 151, 157, 158
Dirac's delta function, 8
displacement difference equation form of Newmark algorithm, 40
displacement difference form of Newmark algorithm, 38
displacement vector, 7
dynamic load vector, 7, 45

eigenmode, 10, 115
eigenvalue, 9
eigenvalue separation principle, 53, 59, 62
error analysis of calculated eigenvalues, 69, 83, 87
error vector, 41, 83
Euclidean matrix norm, 86
Euclidean vector norm, 83, 85

forward vector iteration, 90, 99–101
forward vector iteration with Gram-Schmidt orthogonalization, 109, 110, 129
forward vector iteration with shift, 104
Fourier transform, 8
frequency response matrix, 8
fundamental matrix, 12

Galerkin variation, 19
Gauss factorization, 53, 55, 60, 66, 160, 163, 164
general Jacobi iteration method, 116, 122, 126, 158, 159
generalized alpha algorithm, 32, 49
generalized eigenvalue problem, 9, 22, 53, 57, 58, 63, 65, 71, 78, 90, 95–97, 100, 107, 115, 116, 122, 126, 128, 133, 134, 139, 147, 149, 154, 156, 159, 162, 165
Gram-Schmidt orthogonalization, 109, 138, 149, 160
Guyan reduction, 69

Hilbert matrix norm, 83, 86, 87
HOQR iteration method, 116, 140
Householder reduction method, 128, 133, 134, 140, 149

impulse response matrix, 8
infinity matrix norm, 86
infinity vector norm, 85
initial value vector, 7, 18
inverse vector iteration, 90, 95, 101, 149
inverse vector iteration with Gram-Schmidt orthogonalization, 109, 110, 129
inverse vector iteration with Rayleigh quotient shift, 106, 107
inverse vector iteration with shift, 102, 103, 106
iterative similarity transformation method, 115, 116

Jordan boxes, 11
Jordan normal form, 11

Kronecker's delta, 136

linear convergence, 93, 109
linear viscous damping, 7
lower triangular matrix, 53, 56, 66, 67, 72
lumped mass matrix, 19

M-orthonormalization of vector basis, 152
mass matrix, 7, 19, 129, 164
matrix exponential function, 12, 37
matrix norm, 85
modal coordinate, 25, 26, 76, 77, 91
modal damping ratio, 25, 36
modal load, 26, 36
modal mass, 10, 23, 36, 71, 75, 79, 83, 87, 90, 99, 107, 109, 110, 115, 150, 152, 153, 159, 167
modal matrix, 10, 22, 87, 115, 147
modal space, 102
multi step algorithm, 31
multi value algorithm, 31

negative numerical damping, 46
Newmark algorithm, 31, 34
numerical accuracy, 31, 41
numerical damping, 31, 46
numerical stability, 31, 42

omission criteria for similarity transformation, 119, 120, 125, 126
one matrix norm, 86
one vector norm, 85
orthogonality property, 10, 15, 25, 91
orthonormal matrix, 11, 116, 118, 129, 136

p vector norm, 85
partitioned matrix, 69
period distortion, 45
period distortion per period, 45
period elongation, 45
period shortening, 45
permutation of numbers, 117
positive definite matrix, 7, 66
positive numerical damping, 46
positive semi-definite matrix, 8
projected mass matrix, 77, 81, 82, 156, 159–161
projected stiffness matrix, 77, 81, 82, 156, 159–161

QR iteration method, 129, 136, 139, 140
quadratic convergence, 95
quasi-static response, 26

Rayleigh quotient, 74, 75, 89, 91, 93, 96, 97, 99, 100, 102, 106, 107, 112, 150
Rayleigh's principle, 74
Rayleigh-Ritz analysis, 69, 74, 76, 79, 80, 148, 156, 160
relative error of iteration vector, 93, 96
relative error of Rayleigh quotient, 94, 96, 101
Ritz basis, 76, 79, 82, 148, 156

secant iteration, 167, 169, 170
shift, 53
shift on stiffness matrix, 63, 64, 101
similarity transformation, 11, 53, 65, 72, 90, 115, 117, 122, 128, 137, 139, 164
similarity transformation matrix, 65, 115, 117, 122, 129, 133, 134, 137, 139
simple eigenvalues, 9
simultaneous inverse vector iteration, 147–149, 152–154, 156, 158–161
single step algorithm, 31
single step multi value algorithm, 31, 37
single value algorithm, 31
special eigenvalue problem, 9, 13, 65, 66, 84, 87, 116, 118, 120, 128, 129, 133, 134, 136, 139
special Jacobi iteration method, 116, 118–120, 129
spectral decomposition, 66
spectral matrix norm, 86
standard eigenvalue problem, 149
state vector, 12
state vector formulation, 12
static condensation, 69, 72, 80
stiff-body motion, 64
stiffness matrix, 7, 19, 131, 136, 164
Sturm sequence, 53, 59, 160, 165, 167
Sturm sequence check, 61, 103, 149, 153, 159, 160, 167, 169, 170
Sturm sequence iteration, 149
subspace iteration, 139, 147, 148, 152, 156, 159–161
sweep, 120, 121, 125, 127
symmetric matrix, 7, 128, 129
system reduction, 28, 29

telescope method, 55, 167
three diagonal matrix, 128, 133, 140, 149, 164, 168
transfer matrix, 42
transposed matrix, 15
two vector norm, 85

unconditional stable algorithm, 44
undamped circular eigenfrequency, 9, 36
undamped eigenmode, 25
undamped eigenvibration, 9
unit matrix, 8, 12
unitary matrix, 11
upper three diagonal matrix, 140
upper triangular matrix, 56, 136, 149–151

vector basis, 75, 76, 138, 148
vector iteration method, 89
vector iteration with deflation, 109
vector iteration with Gram-Schmidt orthogonalization, 109
vector iteration with shift, 101
vector norm, 85
velocity, 18
velocity vector, 7
vibrating string, 18

wave equation, 18
APPENDIX A
Solutions to Exercises

A.1 Exercise 1.1


Given the following mass- and stiffness matrices
⎡ ⎤ ⎡ ⎤
1 0 0 2 −1 0
⎢ ⎥ ⎢ ⎥
M = ⎣0 2 0 ⎦ , K = ⎣−1 2 0⎦ (1)
0 0 12 0 0 3
1. Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
2. Determine two vectors that are M-orthonormal, but are not eigenmodes.

SOLUTIONS:

Question 1:

The generalized eigenvalue problem (1-9) becomes

\[
\begin{bmatrix} 2 - \lambda_j & -1 & 0\\ -1 & 2 - 2\lambda_j & 0\\ 0 & 0 & 3 - \tfrac12\lambda_j\end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\\ \Phi_3^{(j)}\end{bmatrix}
= \begin{bmatrix} 0\\ 0\\ 0\end{bmatrix} \qquad (2)
\]

Upon expanding the determinant of the coefficient matrix along the 3rd row, the characteristic equation (1-10) becomes

\[
P(\lambda) = P^{(0)}(\lambda)
= \det\begin{bmatrix} 2 - \lambda_j & -1 & 0\\ -1 & 2 - 2\lambda_j & 0\\ 0 & 0 & 3 - \tfrac12\lambda_j\end{bmatrix}
= \Big(3 - \tfrac12\lambda_j\Big)\Big(\big(2 - \lambda_j\big)\big(2 - 2\lambda_j\big) - (-1)^2\Big)
= \Big(3 - \tfrac12\lambda_j\Big)\big(3 - 6\lambda_j + 2\lambda_j^2\big) = 0
\;\Rightarrow\;
\lambda_j = \begin{cases} \tfrac12\big(3 - \sqrt{3}\big)\ , & j = 1\\[2pt] \tfrac12\big(3 + \sqrt{3}\big)\ , & j = 2\\[2pt] 6\ , & j = 3 \end{cases} \qquad (3)
\]


The largest eigenvalue λ3 = 6 is obtained when the 1st factor in (3) equals 0, whereas the two lowest solutions correspond to vanishing of the 2nd factor.

Because the 3rd eigenmode is decoupled from the 1st and 2nd, the solution method is slightly different in this case. As seen by inspection, the solutions have the form

\[
\bar{\Phi}^{(j)} = \begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\\ 0\end{bmatrix}\ ,\quad j = 1, 2\ ;\qquad
\bar{\Phi}^{(3)} = \begin{bmatrix} 0\\ 0\\ 1\end{bmatrix} \qquad (4)
\]

The 1st and 2nd components of the 1st and 2nd eigenmodes, Φ1(j) and Φ2(j), are determined from the first two equations in (2). We choose to set Φ1(j) = 1 and determine Φ2(j) from the 1st equation. Notice that we might as well have determined Φ2(j) from the 2nd equation. Then

\[
\big(2 - \lambda_j\big)\cdot 1 - \Phi_2^{(j)} = 0 \;\Rightarrow\;
\Phi_2^{(j)} = \begin{cases} \tfrac12\big(1 + \sqrt{3}\big)\ , & j = 1\\[2pt] \tfrac12\big(1 - \sqrt{3}\big)\ , & j = 2\end{cases} \qquad (5)
\]

The modal masses become

\[
M_j = \bar{\Phi}^{(j)\,T} M\,\bar{\Phi}^{(j)}
= \begin{bmatrix} 1\\ \Phi_2^{(j)}\\ 0\end{bmatrix}^T
\begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}
\begin{bmatrix} 1\\ \Phi_2^{(j)}\\ 0\end{bmatrix}
= 1 + 2\big(\Phi_2^{(j)}\big)^2
= \begin{cases} 3 + \sqrt{3}\ , & j = 1\\ 3 - \sqrt{3}\ , & j = 2\end{cases} \qquad (6)
\]
\[
M_3 = \bar{\Phi}^{(3)\,T} M\,\bar{\Phi}^{(3)}
= \begin{bmatrix} 0\\ 0\\ 1\end{bmatrix}^T
\begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}
\begin{bmatrix} 0\\ 0\\ 1\end{bmatrix} = \frac12 \qquad (7)
\]

Φ(1) denotes the 1st eigenmode normalized to unit modal mass. It is related to Φ̄(1) in the following way

\[
\Phi^{(1)} = \frac{1}{\sqrt{M_1}}\,\bar{\Phi}^{(1)}
= \frac{1}{\sqrt{3 + \sqrt{3}}}\begin{bmatrix} 1\\ \tfrac12\big(1 + \sqrt{3}\big)\\ 0\end{bmatrix}
= \begin{bmatrix} 0.4597\\ 0.6280\\ 0\end{bmatrix} \qquad (8)
\]

The other modes are treated in the same manner, which results in the following eigensolutions

\[
\Lambda = \begin{bmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3\end{bmatrix}
= \begin{bmatrix} \tfrac12\big(3 - \sqrt{3}\big) & 0 & 0\\ 0 & \tfrac12\big(3 + \sqrt{3}\big) & 0\\ 0 & 0 & 6\end{bmatrix}\ ,\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \Phi^{(3)}\big]
= \begin{bmatrix} 0.4597 & 0.8881 & 0\\ 0.6280 & -0.3251 & 0\\ 0 & 0 & 1.4142\end{bmatrix} \qquad (9)
\]
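These hand calculations are easy to verify numerically. The following MATLAB lines are a check added in this edition, not part of the original solution.

```matlab
% Verify the eigensolutions (9) with MATLAB's generalized eigensolver.
M = [1 0 0; 0 2 0; 0 0 0.5];
K = [2 -1 0; -1 2 0; 0 0 3];
[Phi, Lambda] = eig(K, M);                % columns of Phi: eigenvectors
[lambda, idx] = sort(diag(Lambda));       % ascending eigenvalues
Phi = Phi(:, idx);
for j = 1:3                               % normalize to unit modal mass
    Phi(:, j) = Phi(:, j) / sqrt(Phi(:, j)' * M * Phi(:, j));
end
disp(lambda')                             % 0.6340  2.3660  6.0000
```

The printed eigenvalues equal ½(3 − √3) ≈ 0.6340, ½(3 + √3) ≈ 2.3660 and 6, in agreement with (3).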

Question 2:

Consider the vectors

\[
\mathbf{v}_1 = \begin{bmatrix} 1\\ 0\\ 0\end{bmatrix}\ ,\qquad
\mathbf{v}_2 = \begin{bmatrix} 0\\ \tfrac{\sqrt{2}}{2}\\ 0\end{bmatrix} \qquad (10)
\]

Upon insertion the following relations are seen to be valid

\[
\mathbf{v}_1^T M \mathbf{v}_1 = 1\ ,\quad \mathbf{v}_2^T M \mathbf{v}_2 = 1\ ,\quad \mathbf{v}_1^T M \mathbf{v}_2 = 0 \qquad (11)
\]

Hence, v1 and v2 are mutually M-orthonormal. However,

\[
K\mathbf{v}_1 = \begin{bmatrix} 2\\ -1\\ 0\end{bmatrix}
\neq \lambda_1 M\mathbf{v}_1 = \begin{bmatrix} \tfrac12\big(3 - \sqrt{3}\big)\\ 0\\ 0\end{bmatrix}\ ,\qquad
K\mathbf{v}_2 = \begin{bmatrix} -\tfrac{\sqrt{2}}{2}\\ \sqrt{2}\\ 0\end{bmatrix}
\neq \lambda_2 M\mathbf{v}_2 = \begin{bmatrix} 0\\ \tfrac{\sqrt{2}}{2}\big(3 + \sqrt{3}\big)\\ 0\end{bmatrix} \qquad (12)
\]

Hence, neither v1 nor v2 is an eigenmode.

A.2 Exercise 1.2


The eigensolutions with eigenmodes normalized to unit modal mass of a 2-dimensional generalized
eigenvalue problem are given as

\[
\Lambda = \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & 4\end{bmatrix}\ ,\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\big]
= \begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\[2pt] \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\end{bmatrix} \qquad (1)
\]

1. Calculate M and K.

SOLUTIONS:

Question 1:

From (1-19) and (1-20) follows

\[
M = \big(\Phi^{-1}\big)^T \mathbf{m}\,\Phi^{-1} \qquad (2)
\]
\[
K = \big(\Phi^{-1}\big)^T \mathbf{k}\,\Phi^{-1} \qquad (3)
\]

Since it is known that the eigenmodes have been normalized to unit modal mass it follows from (1-20)
and (1-22) that

\[
\mathbf{m} = I\ ,\qquad \mathbf{k} = \Lambda \qquad (4)
\]

The inverse of the modal matrix becomes

\[
\Phi^{-1} = \begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\[2pt] \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\end{bmatrix}^{-1}
= \begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\[2pt] \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\end{bmatrix} \qquad (5)
\]
− 22
Of course, (5) can be obtained by direct calculation. Alternatively, the result may be obtained from the
following arguments. Notice that Φ is orthonormal, so Φ−1 = ΦT , cf. (1-23). Additionally, the modal
matrix is symmetric, i.e. Φ = ΦT , from which the indicated result follows.

Insertion of (4) and (5) into (2) and (3) provides

\[
M = \begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\[2pt] \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\end{bmatrix}^T
\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}
\begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\[2pt] \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\end{bmatrix}
= \begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix} \qquad (6)
\]
\[
K = \begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\[2pt] \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\end{bmatrix}^T
\begin{bmatrix} 1 & 0\\ 0 & 4\end{bmatrix}
\begin{bmatrix} \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\[2pt] \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2}\end{bmatrix}
= \begin{bmatrix} 2.5 & -1.5\\ -1.5 & 2.5\end{bmatrix} \qquad (7)
\]

Actually, since M = I, the considered eigenvalue problem is of the special type, cf. the remarks subse-
quent to (1-9).
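The result is easily checked numerically; the following MATLAB lines are an added verification, not part of the original solution.

```matlab
% Reconstruct M and K from the modal data of (1) and verify (6)-(7).
Lambda = diag([1 4]);
Phi = [sqrt(2)/2  sqrt(2)/2; sqrt(2)/2 -sqrt(2)/2];
PhiInv = inv(Phi);                        % equals Phi itself, cf. (5)
M = PhiInv' * eye(2) * PhiInv             % gives the identity matrix
K = PhiInv' * Lambda * PhiInv             % gives [2.5 -1.5; -1.5 2.5]
```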

A.3 Exercise 3.1

Given the following mass- and stiffness matrices

\[
M = \begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}\ ,\qquad
K = \begin{bmatrix} 2 & -1 & 0\\ -1 & 2 & 0\\ 0 & 0 & 3\end{bmatrix} \qquad (1)
\]

1. Show that the eigenvalue separation principle is valid for the considered example.

SOLUTIONS:

Question 1:

The eigenvalues λj(0), which have been calculated in Exercise 1.1, are

\[
\lambda_j^{(0)} = \begin{cases} \tfrac12\big(3 - \sqrt{3}\big)\ , & j = 1\\[2pt] \tfrac12\big(3 + \sqrt{3}\big)\ , & j = 2\\[2pt] 6\ , & j = 3\end{cases} \qquad (2)
\]
Correspondingly, the eigenvalues λj(1) and λj(2) are calculated from

\[
P^{(1)}(\lambda^{(1)}) = \det\begin{bmatrix} 2 - \lambda_j^{(1)} & -1\\ -1 & 2 - 2\lambda_j^{(1)}\end{bmatrix}
= \big(2 - \lambda_j^{(1)}\big)\big(2 - 2\lambda_j^{(1)}\big) - (-1)^2
= 3 - 6\lambda_j^{(1)} + 2\big(\lambda_j^{(1)}\big)^2 = 0
\;\Rightarrow\;
\lambda_j^{(1)} = \begin{cases} \tfrac12\big(3 - \sqrt{3}\big)\ , & j = 1\\[2pt] \tfrac12\big(3 + \sqrt{3}\big)\ , & j = 2\end{cases} \qquad (3)
\]
\[
P^{(2)}(\lambda^{(2)}) = \det\big(2 - \lambda_j^{(2)}\big) \;\Rightarrow\; \lambda_1^{(2)} = 2 \qquad (4)
\]

Then, (3-25) attains the following forms for m = 0 and m = 1

\[
0 \le \lambda_1^{(0)} \le \lambda_1^{(1)} \le \lambda_2^{(0)} \le \lambda_2^{(1)} \le \lambda_3^{(0)} \le \infty
\;\Rightarrow\;
0 \le \tfrac12\big(3 - \sqrt{3}\big) \le \tfrac12\big(3 - \sqrt{3}\big) \le \tfrac12\big(3 + \sqrt{3}\big) \le \tfrac12\big(3 + \sqrt{3}\big) \le 6 \le \infty \qquad (5)
\]
\[
0 \le \lambda_1^{(1)} \le \lambda_1^{(2)} \le \lambda_2^{(1)} \le \infty
\;\Rightarrow\;
0 \le \tfrac12\big(3 - \sqrt{3}\big) \le 2 \le \tfrac12\big(3 + \sqrt{3}\big) \le \infty \qquad (6)
\]

Hence, (3-25) holds for the considered example. λ1(0) = λ1(1) and λ2(0) = λ2(1) because of the decoupling of the 3rd eigenmode from the 1st and 2nd eigenmode.

A.4 Exercise 3.2

Given the following mass- and stiffness matrices

\[
M = \begin{bmatrix} 2 & 0\\ 0 & 0\end{bmatrix}\ ,\qquad
K = \begin{bmatrix} 6 & -1\\ -1 & 4\end{bmatrix} \qquad (1)
\]

1. Calculate the eigenvalues and eigenmodes normalized to unit modal mass.


2. Perform a shift ρ = 3 on K and calculate the eigenvalues and eigenmodes of the new problem.

SOLUTIONS:

Question 1:

The generalized eigenvalue problem (1-9) is written in the form

\[
\begin{bmatrix} 2 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\end{bmatrix}
= \frac{1}{\lambda_j}\begin{bmatrix} 6 & -1\\ -1 & 4\end{bmatrix}\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\end{bmatrix}\ ,\quad j = 1, 2 \qquad (2)
\]

Obviously, (2) has the solution

\[
\lambda_2 = \infty\ ,\qquad
\Phi^{(2)} = \begin{bmatrix} \Phi_1^{(2)}\\ \Phi_2^{(2)}\end{bmatrix} = \begin{bmatrix} 0\\ 1\end{bmatrix} \qquad (3)
\]

Hence, λ2 = ∞ is an eigenvalue. This is so because the mass matrix is singular, with zeroes in the last row and column. Since the modal mass M2 related to the eigenmode Φ(2) is zero, this mode cannot be normalized in the usual manner. In Section 4.1 the problem of infinite eigenvalues is dealt with thoroughly.

The other eigensolution may be obtained by the standard approach. Then, the eigenvalue problem (2) is written in the form

\[
\begin{bmatrix} 6 - 2\lambda_1 & -1\\ -1 & 4\end{bmatrix}
\begin{bmatrix} \Phi_1^{(1)}\\ \Phi_2^{(1)}\end{bmatrix}
= \begin{bmatrix} 0\\ 0\end{bmatrix} \qquad (4)
\]

The characteristic equation (1-11) becomes

\[
\det\begin{bmatrix} 6 - 2\lambda_1 & -1\\ -1 & 4\end{bmatrix}
= 4\big(6 - 2\lambda_1\big) - (-1)^2 = 23 - 8\lambda_1 = 0
\;\Rightarrow\; \lambda_1 = \frac{23}{8} \qquad (5)
\]

We choose to set Φ1(1) = 1 and determine Φ2(1) from the 1st equation. Then

\[
\big(6 - 2\lambda_1\big)\cdot 1 - \Phi_2^{(1)} = 0 \;\Rightarrow\; \Phi_2^{(1)} = \frac14 \qquad (6)
\]

The modal mass becomes

\[
M_1 = \bar{\Phi}^{(1)\,T} M\,\bar{\Phi}^{(1)}
= \begin{bmatrix} 1\\ \tfrac14\end{bmatrix}^T
\begin{bmatrix} 2 & 0\\ 0 & 0\end{bmatrix}
\begin{bmatrix} 1\\ \tfrac14\end{bmatrix} = 2 \qquad (7)
\]

Then, the eigenmode normalized to unit modal mass, Φ(1), becomes

\[
\Phi^{(1)} = \frac{1}{\sqrt{M_1}}\,\bar{\Phi}^{(1)}
= \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ \tfrac14\end{bmatrix}
= \begin{bmatrix} 0.7071\\ 0.1768\end{bmatrix} \qquad (8)
\]

Hence, the following eigensolutions have been obtained

\[
\Lambda = \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix}
= \begin{bmatrix} \tfrac{23}{8} & 0\\ 0 & \infty\end{bmatrix}\ ,\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\big]
= \begin{bmatrix} 0.7071 & 0\\ 0.1768 & 1\end{bmatrix} \qquad (9)
\]

Question 2:

(3-38) attains the form

\[
\hat{K} = K - 3M
= \begin{bmatrix} 6 & -1\\ -1 & 4\end{bmatrix} - 3\begin{bmatrix} 2 & 0\\ 0 & 0\end{bmatrix}
= \begin{bmatrix} 0 & -1\\ -1 & 4\end{bmatrix} \qquad (10)
\]

The eigenvalue problem (3-37) becomes

\[
\left(\begin{bmatrix} 0 & -1\\ -1 & 4\end{bmatrix} - \lambda_j\begin{bmatrix} 2 & 0\\ 0 & 0\end{bmatrix}\right)
\begin{bmatrix} \Phi_1^{(1)}\\ \Phi_2^{(1)}\end{bmatrix}
= \begin{bmatrix} 0\\ 0\end{bmatrix} \qquad (11)
\]

For the same reason as in Question 1, λ2 = ∞ is still an eigenvalue, with the eigenmode given by (3). The characteristic equation for the 1st eigenvalue becomes, cf. (5)

\[
\det\begin{bmatrix} -2\lambda_1 & -1\\ -1 & 4\end{bmatrix}
= 4\big(-2\lambda_1\big) - (-1)^2 = -1 - 8\lambda_1 = 0
\;\Rightarrow\; \lambda_1 = -\frac18 = \frac{23}{8} - 3 \qquad (12)
\]

Let Φ1(1) = 1, and determine Φ2(1) from the 1st equation of (11)

\[
-2\lambda_1\cdot 1 - \Phi_2^{(1)} = 0 \;\Rightarrow\; \Phi_2^{(1)} = \frac14 \qquad (13)
\]

which is identical to (6). Hence Φ(1) is unaffected by the shift, as expected, cf. the comments following (3-38). The eigensolutions are unchanged as given by (9), save that λ1 = −1/8.

A.5 Exercise 3.4: Theory

Gauss Elimination

Given a symmetric matrix K of the dimension n × n with the components Kij = Kji. Consider the static equilibrium equation

\[
K\mathbf{x} = \mathbf{f} \;\Rightarrow\;
\begin{bmatrix} K_{11} & K_{12} & K_{13} & \cdots & K_{1n}\\ K_{21} & K_{22} & K_{23} & \cdots & K_{2n}\\ K_{31} & K_{32} & K_{33} & \cdots & K_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ K_{n1} & K_{n2} & K_{n3} & \cdots & K_{nn}\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n\end{bmatrix}
= \begin{bmatrix} f_1\\ f_2\\ f_3\\ \vdots\\ f_n\end{bmatrix} \qquad (1)
\]

In order to have a one as the 1st element of the main diagonal of the coefficient matrix, the 1st equation is divided by K11, resulting in

\[
\begin{bmatrix} 1 & K_{12}^{(1)} & K_{13}^{(1)} & \cdots & K_{1n}^{(1)}\\ K_{21} & K_{22} & K_{23} & \cdots & K_{2n}\\ K_{31} & K_{32} & K_{33} & \cdots & K_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ K_{n1} & K_{n2} & K_{n3} & \cdots & K_{nn}\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n\end{bmatrix}
= \begin{bmatrix} f_1^{(1)}\\ f_2\\ f_3\\ \vdots\\ f_n\end{bmatrix} \qquad (2)
\]

where

\[
K_{1j}^{(1)} = \frac{K_{1j}}{K_{11}}\ ,\quad j = 2, \ldots, n\ ;\qquad
f_1^{(1)} = \frac{f_1}{K_{11}} \qquad (3)
\]

In turn, the 1st equation of (2) is multiplied by Ki1, i = 2, ..., n, and the resulting equation is subtracted from the ith equation. This produces a zero in the ith row of the 1st column, corresponding to the following system of equations

\[
\begin{bmatrix} 1 & K_{12}^{(1)} & K_{13}^{(1)} & \cdots & K_{1n}^{(1)}\\ 0 & K_{22}^{(1)} & K_{23}^{(1)} & \cdots & K_{2n}^{(1)}\\ 0 & K_{32}^{(1)} & K_{33}^{(1)} & \cdots & K_{3n}^{(1)}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & K_{n2}^{(1)} & K_{n3}^{(1)} & \cdots & K_{nn}^{(1)}\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n\end{bmatrix}
= \begin{bmatrix} f_1^{(1)}\\ f_2^{(1)}\\ f_3^{(1)}\\ \vdots\\ f_n^{(1)}\end{bmatrix} \qquad (4)
\]

where

\[
K_{ij}^{(1)} = K_{ij} - K_{i1} K_{1j}^{(1)}\ ,\quad i, j = 2, \ldots, n\ ;\qquad
f_i^{(1)} = f_i - K_{i1} f_1^{(1)}\ ,\quad i = 2, \ldots, n \qquad (5)
\]

Next, the 2nd equation is divided by K22(1), so the coefficient at the 2nd component of the main diagonal becomes equal to 1. In turn, the resulting 2nd equation is multiplied by Ki2(1), i = 3, ..., n, and subtracted from the ith equation. This produces zeros in the ith row of the 2nd column below the main diagonal, corresponding to the system of equations

\[
\begin{bmatrix} 1 & K_{12}^{(1)} & K_{13}^{(1)} & \cdots & K_{1n}^{(1)}\\ 0 & 1 & K_{23}^{(2)} & \cdots & K_{2n}^{(2)}\\ 0 & 0 & K_{33}^{(2)} & \cdots & K_{3n}^{(2)}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & K_{n3}^{(2)} & \cdots & K_{nn}^{(2)}\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n\end{bmatrix}
= \begin{bmatrix} f_1^{(1)}\\ f_2^{(2)}\\ f_3^{(2)}\\ \vdots\\ f_n^{(2)}\end{bmatrix} \qquad (6)
\]

where

\[
K_{2j}^{(2)} = \frac{K_{2j}^{(1)}}{K_{22}^{(1)}}\ ,\quad j = 3, \ldots, n\ ;\qquad
f_2^{(2)} = \frac{f_2^{(1)}}{K_{22}^{(1)}}\ ;\qquad
K_{ij}^{(2)} = K_{ij}^{(1)} - K_{i2}^{(1)} K_{2j}^{(2)}\ ,\quad i, j = 3, \ldots, n\ ;\qquad
f_i^{(2)} = f_i^{(1)} - K_{i2}^{(1)} f_2^{(2)}\ ,\quad i = 3, \ldots, n \qquad (7)
\]

The process of producing ones in the main diagonal and zeros below the main diagonal is continued for all n columns, resulting in the following system of linear equations

\[
\begin{bmatrix} 1 & K_{12}^{(1)} & K_{13}^{(1)} & \cdots & K_{1n}^{(1)}\\ 0 & 1 & K_{23}^{(2)} & \cdots & K_{2n}^{(2)}\\ 0 & 0 & 1 & \cdots & K_{3n}^{(3)}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n\end{bmatrix}
= \begin{bmatrix} f_1^{(1)}\\ f_2^{(2)}\\ f_3^{(3)}\\ \vdots\\ f_n^{(n)}\end{bmatrix} \qquad (8)
\]

Next, (1) is solved simultaneously with n right-hand sides, where the loads form the columns of a unit matrix. The n solution vectors X = [x1 x2 x3 · · · xn] are organized in the matrix equation

\[
KX = I \;\Rightarrow\;
\begin{bmatrix} K_{11} & K_{12} & K_{13} & \cdots & K_{1n}\\ K_{21} & K_{22} & K_{23} & \cdots & K_{2n}\\ K_{31} & K_{32} & K_{33} & \cdots & K_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ K_{n1} & K_{n2} & K_{n3} & \cdots & K_{nn}\end{bmatrix}
\begin{bmatrix} x_{11} & x_{12} & x_{13} & \cdots & x_{1n}\\ x_{21} & x_{22} & x_{23} & \cdots & x_{2n}\\ x_{31} & x_{32} & x_{33} & \cdots & x_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ x_{n1} & x_{n2} & x_{n3} & \cdots & x_{nn}\end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\end{bmatrix} \qquad (9)
\]

Following the steps (2)-(8), simultaneous Gauss elimination of the coefficient matrix and the $n$ right-hand sides provides the following equivalent matrix equation

$$
\begin{bmatrix}
1 & K_{12}^{(1)} & K_{13}^{(1)} & \cdots & K_{1n}^{(1)} \\
0 & 1 & K_{23}^{(2)} & \cdots & K_{2n}^{(2)} \\
0 & 0 & 1 & \cdots & K_{3n}^{(3)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{bmatrix}
\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \cdots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \cdots & x_{2n} \\
x_{31} & x_{32} & x_{33} & \cdots & x_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & x_{n3} & \cdots & x_{nn}
\end{bmatrix}
=
\begin{bmatrix}
f_{11}^{(1)} & 0 & 0 & \cdots & 0 \\
f_{21}^{(2)} & f_{22}^{(2)} & 0 & \cdots & 0 \\
f_{31}^{(3)} & f_{32}^{(3)} & f_{33}^{(3)} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
f_{n1}^{(n)} & f_{n2}^{(n)} & f_{n3}^{(n)} & \cdots & f_{nn}^{(n)}
\end{bmatrix}
\qquad (10)
$$

As indicated, the identity matrix on the right-hand side is transformed into a lower triangular matrix $\mathbf{F}$.

In the program the triangularization of the matrix $\mathbf{K}$ and the calculation of the matrix $\mathbf{F}$ are performed in a matrix $\mathbf{A}$ of dimension $n \times 2n$, which at the entry of the triangularization loop has the form


$$
\mathbf{A} = \begin{bmatrix} \mathbf{K} & \mathbf{I} \end{bmatrix}
\qquad (11)
$$

At exit from the triangularization loop the matrix $\mathbf{A}$ stores the triangularized stiffness matrix at the position originally occupied by $\mathbf{K}$, and the matrix $\mathbf{F}$ at the position originally occupied by the unit matrix.
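A minimal sketch of this bookkeeping in Python/NumPy may be helpful (a hypothetical fragment illustrating the scheme above, not the program referred to in the text; it assumes a symmetric positive definite $\mathbf{K}$, so no pivoting is needed):

```python
import numpy as np

def triangulate(K):
    """Simultaneous Gauss elimination on the augmented matrix A = [K  I].

    Returns the unit upper triangular coefficient matrix of (8) and the
    transformed right-hand sides F of (10).
    """
    n = K.shape[0]
    A = np.hstack([K.astype(float), np.eye(n)])
    for k in range(n):
        A[k, :] /= A[k, k]               # a one on the main diagonal, cf. (2)
        for i in range(k + 1, n):        # zeros below the diagonal, cf. (4)
            A[i, :] -= A[i, k] * A[k, :]
    return A[:, :n], A[:, n:]

K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])
LT, F = triangulate(K)                   # F is lower triangular, cf. (10)
```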

Calculation of $\mathbf{L}$, $\mathbf{D}$ and $(\mathbf{S}^{-1})^T$


Using the Gauss factorization of the stiffness matrix, (9) may be written as, cf. (3-1)

$$
\mathbf{K}\mathbf{X} = \mathbf{L}\mathbf{D}\mathbf{L}^T\mathbf{X} = \mathbf{I}
\quad\Rightarrow\quad
\mathbf{L}^T\mathbf{X} = \mathbf{D}^{-1}\mathbf{L}^{-1}\mathbf{I} = \mathbf{D}^{-1}\mathbf{L}^{-1} = \mathbf{F}
\qquad (12)
$$

Upon comparison of (10) and (12) it becomes clear that $\mathbf{L}^T$ is stored as the coefficient matrix in (10), whereas the right-hand sides store the matrix $\mathbf{F} = \mathbf{D}^{-1}\mathbf{L}^{-1}$. Since $\mathbf{L}^{-1}$ is a lower triangular matrix with ones in the main diagonal, the main diagonal of $\mathbf{F}$ must contain the main diagonal of $\mathbf{D}^{-1}$. Hence,

$$
\mathbf{D}^{-1} =
\begin{bmatrix}
f_{11}^{(1)} & 0 & 0 & \cdots & 0 \\
0 & f_{22}^{(2)} & 0 & \cdots & 0 \\
0 & 0 & f_{33}^{(3)} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & f_{nn}^{(n)}
\end{bmatrix}
\quad\Rightarrow\quad
\mathbf{D} =
\begin{bmatrix}
\frac{1}{f_{11}^{(1)}} & 0 & 0 & \cdots & 0 \\
0 & \frac{1}{f_{22}^{(2)}} & 0 & \cdots & 0 \\
0 & 0 & \frac{1}{f_{33}^{(3)}} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \frac{1}{f_{nn}^{(n)}}
\end{bmatrix}
\qquad (13)
$$

Finally, cf. (3-49)


$$
\mathbf{S} = \mathbf{L}\mathbf{D}^{\frac{1}{2}}
\quad\Rightarrow\quad
\mathbf{S}^{-1} = \mathbf{D}^{-\frac{1}{2}}\mathbf{L}^{-1}
= \mathbf{D}^{\frac{1}{2}}\mathbf{D}^{-1}\mathbf{L}^{-1}
= \mathbf{D}^{\frac{1}{2}}\mathbf{F}
\qquad (14)
$$

The matrices $\mathbf{D}$ and $(\mathbf{S}^{-1})^T$ are retrieved from the right-hand sides of (10), as stored in the matrix $\mathbf{F}$, according to the indicated relations at the end of the program.
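A self-contained sketch of this retrieval (again a hypothetical fragment, not the author's program) follows directly from (12)-(14):

```python
import numpy as np

K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])
A = np.hstack([K.copy(), np.eye(3)])
for k in range(3):                        # the triangularization loop (2)-(8)
    A[k, :] /= A[k, k]
    for i in range(k + 1, 3):
        A[i, :] -= A[i, k] * A[k, :]
F = A[:, 3:]                              # F = D^{-1} L^{-1}, cf. (12)

d = 1.0 / np.diag(F)                      # diag(F) = diag(D^{-1}), cf. (13)
D = np.diag(d)
S_inv = np.diag(np.sqrt(d)) @ F           # S^{-1} = D^{1/2} F, cf. (14)
assert np.allclose(S_inv @ K @ S_inv.T, np.eye(3))   # check of K = S S^T
```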

A.6 Exercise 4.1


Given the following mass- and stiffness matrices

$$
\mathbf{M} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \qquad
\mathbf{K} = \begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\qquad (1)
$$

1. Perform a static condensation by the conventional procedure based on (4-5), (4-6), and next by
Rayleigh-Ritz analysis with the Ritz basis given by (4-62).

SOLUTIONS:

Question 1:

The 1st and 3rd rows, and next the 1st and 3rd columns, of the matrices are interchanged, which brings the matrices of the general eigenvalue problem on the following form, cf. (4-1), (4-2)

$$
\mathbf{M} = \begin{bmatrix} \mathbf{M}_{11} & \mathbf{M}_{12} \\ \mathbf{M}_{21} & \mathbf{M}_{22} \end{bmatrix}
= \begin{bmatrix} 1 & 1 & 0 \\ 1 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad
\mathbf{K} = \begin{bmatrix} \mathbf{K}_{11} & \mathbf{K}_{12} \\ \mathbf{K}_{21} & \mathbf{K}_{22} \end{bmatrix}
= \begin{bmatrix} 2 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix}
\qquad (2)
$$

Notice that the interchange of two rows or two columns may change the sign, but not the numerical value, of the characteristic polynomial. However, since the characteristic polynomial is zero at an eigenvalue, the determination of the eigenvalues is unaffected by the sign change.

The reduced stiffness matrix (4-7) becomes

     
$$
\tilde{\mathbf{K}}_{11} = \mathbf{K}_{11} - \mathbf{K}_{12}\mathbf{K}_{22}^{-1}\mathbf{K}_{21}
= \begin{bmatrix} 2 & -1 \\ -1 & 4 \end{bmatrix}
- \begin{bmatrix} 0 \\ -1 \end{bmatrix} [6]^{-1} \begin{bmatrix} 0 & -1 \end{bmatrix}
= \begin{bmatrix} 2 & -1 \\ -1 & \frac{23}{6} \end{bmatrix}
\qquad (3)
$$

The reduced eigenvalue problem (4-6) is solved

   
$$
\begin{bmatrix} 2 & -1 \\ -1 & \frac{23}{6} \end{bmatrix} \boldsymbol{\Phi}_{11}
= \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \boldsymbol{\Phi}_{11}\boldsymbol{\Lambda}_1
\qquad (4)
$$

The eigensolutions with eigenmodes normalized to modal mass 1 with respect to $\mathbf{M}_{11}$ become

     
$$
\boldsymbol{\Lambda}_1 = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
= \begin{bmatrix} 0.7325 & 0 \\ 0 & 9.1008 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_{11} = \begin{bmatrix} \boldsymbol{\Phi}_1^{(1)} & \boldsymbol{\Phi}_1^{(2)} \end{bmatrix}
= \begin{bmatrix} 0.5320 & 1.3103 \\ 0.3892 & -0.9212 \end{bmatrix}
\qquad (5)
$$

From (4-5) follows

 
$$
\boldsymbol{\Phi}_{21} = \begin{bmatrix} \boldsymbol{\Phi}_2^{(1)} & \boldsymbol{\Phi}_2^{(2)} \end{bmatrix}
= -[6]^{-1}\begin{bmatrix} 0 & -1 \end{bmatrix}
\begin{bmatrix} 0.5320 & 1.3103 \\ 0.3892 & -0.9212 \end{bmatrix}
= \begin{bmatrix} 0.0649 & -0.1535 \end{bmatrix}
\qquad (6)
$$

From (4-10) and (4-11) follows

 
$$
\boldsymbol{\Lambda}_2 = [\lambda_3] = [\infty] \,, \qquad
\boldsymbol{\Phi}_{12} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_{22} = [1]
\qquad (7)
$$

After interchanging the degrees of freedom back to the original order (the 1st components of $\boldsymbol{\Phi}_{11}$ and $\boldsymbol{\Phi}_{12}$ are placed as the 3rd components of $\boldsymbol{\Phi}^{(j)}$, and the components of $\boldsymbol{\Phi}_{21}$ and $\boldsymbol{\Phi}_{22}$ are placed as the 1st components of $\boldsymbol{\Phi}^{(j)}$), the following eigensolution is obtained

$$
\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}
= \begin{bmatrix} 0.7325 & 0 & 0 \\ 0 & 9.1008 & 0 \\ 0 & 0 & \infty \end{bmatrix}, \qquad
\boldsymbol{\Phi} = \begin{bmatrix} \boldsymbol{\Phi}^{(1)} & \boldsymbol{\Phi}^{(2)} & \boldsymbol{\Phi}^{(3)} \end{bmatrix}
= \begin{bmatrix} 0.0649 & -0.1535 & 1 \\ 0.3892 & -0.9212 & 0 \\ 0.5320 & 1.3103 & 0 \end{bmatrix}
\qquad (8)
$$

Next, the same problem is solved by means of Rayleigh-Ritz analysis. The Ritz basis is constructed from
(4-62)

$$
\boldsymbol{\Psi}_2 =
\begin{bmatrix} 2 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix}^{-1}
\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 0.5750 & 0.1500 \\ 0.1500 & 0.3000 \\ 0.0250 & 0.0500 \end{bmatrix}
\qquad (9)
$$

The projected mass and stiffness matrices become, cf. (4-63), (4-64)

$$
\tilde{\mathbf{M}} = \boldsymbol{\Psi}_2^T \mathbf{M} \boldsymbol{\Psi}_2
= \begin{bmatrix} 0.5750 & 0.1500 \\ 0.1500 & 0.3000 \\ 0.0250 & 0.0500 \end{bmatrix}^T
\begin{bmatrix} 1 & 1 & 0 \\ 1 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} 0.5750 & 0.1500 \\ 0.1500 & 0.3000 \\ 0.0250 & 0.0500 \end{bmatrix}
= \begin{bmatrix} 0.548125 & 0.371250 \\ 0.371250 & 0.292500 \end{bmatrix}
$$
$$
\tilde{\mathbf{K}} = \boldsymbol{\Psi}_2^T \mathbf{K} \boldsymbol{\Psi}_2
= \begin{bmatrix} 0.5750 & 0.1500 \\ 0.1500 & 0.3000 \\ 0.0250 & 0.0500 \end{bmatrix}^T
\begin{bmatrix} 2 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix}
\begin{bmatrix} 0.5750 & 0.1500 \\ 0.1500 & 0.3000 \\ 0.0250 & 0.0500 \end{bmatrix}
= \begin{bmatrix} 0.5750 & 0.1500 \\ 0.1500 & 0.3000 \end{bmatrix}
\qquad (10)
$$

The eigensolutions to the eigenvalue problem defined by $\tilde{\mathbf{M}}$ and $\tilde{\mathbf{K}}$, with modal masses normalized to 1 with respect to $\tilde{\mathbf{M}}$, become, cf. Box 4.2

     
$$
\mathbf{R} = \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix}
= \begin{bmatrix} 0.7325 & 0 \\ 0 & 9.1008 \end{bmatrix}, \qquad
\mathbf{Q} = \begin{bmatrix} \mathbf{q}^{(1)} & \mathbf{q}^{(2)} \end{bmatrix}
= \begin{bmatrix} 0.6748 & 3.5418 \\ 0.9599 & -4.8415 \end{bmatrix}
\qquad (11)
$$

The solutions for the eigenvectors become, cf. (4-51)

$$
\bar{\boldsymbol{\Phi}} = \begin{bmatrix} \bar{\boldsymbol{\Phi}}^{(1)} & \bar{\boldsymbol{\Phi}}^{(2)} \end{bmatrix}
= \begin{bmatrix} 0.5750 & 0.1500 \\ 0.1500 & 0.3000 \\ 0.0250 & 0.0500 \end{bmatrix}
\begin{bmatrix} 0.6748 & 3.5418 \\ 0.9599 & -4.8415 \end{bmatrix}
= \begin{bmatrix} 0.5320 & 1.3103 \\ 0.3892 & -0.9212 \\ 0.0649 & -0.1535 \end{bmatrix}
\qquad (12)
$$

As seen, the eigenvalues in (11) are identical to the lowest two eigenvalues from the static condensation procedure (8). The two lowest eigenmodes in (8) are retrieved from (12) upon interchanging the 1st and 3rd components of the latter.
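Both procedures are easily reproduced numerically. The following sketch carries out the conventional static condensation on the reordered matrices (2), assuming SciPy's generalized symmetric eigensolver `scipy.linalg.eigh` (eigenvector signs are arbitrary, so they may differ from (5) by a factor $-1$):

```python
import numpy as np
from scipy.linalg import eigh

M11 = np.array([[1., 1.], [1., 2.]])
K11 = np.array([[2., -1.], [-1., 4.]])
K12 = np.array([[0.], [-1.]])
K22 = np.array([[6.]])

K11t = K11 - K12 @ np.linalg.solve(K22, K12.T)   # reduced stiffness, cf. (3)
lam, Phi11 = eigh(K11t, M11)                     # reduced problem (4); M11-normalized modes
Phi21 = -np.linalg.solve(K22, K12.T @ Phi11)     # condensed components, cf. (6)
print(lam)                                       # [0.7325, 9.1008]
```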

A.7 Exercise 4.2


Given the following mass- and stiffness matrices

$$
\mathbf{M} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \qquad
\mathbf{K} = \begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\qquad (1)
$$

1. Calculate approximate eigenvalues and eigenmodes by Rayleigh-Ritz analysis using the following
Ritz basis
$$
\boldsymbol{\Psi} = \begin{bmatrix} \boldsymbol{\Psi}^{(1)} & \boldsymbol{\Psi}^{(2)} \end{bmatrix}
= \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 1 \end{bmatrix}
$$

SOLUTIONS:

Question 1:

The projected mass and stiffness matrices become, cf. (4-45)

$$
\tilde{\mathbf{M}} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 1 \end{bmatrix}^T
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 1 \end{bmatrix}
= \begin{bmatrix} 7 & 1 \\ 1 & 3 \end{bmatrix}, \qquad
\tilde{\mathbf{K}} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 1 \end{bmatrix}^T
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 1 \end{bmatrix}
= \begin{bmatrix} 8 & 4 \\ 4 & 16 \end{bmatrix}
\qquad (2)
$$

The eigensolutions to the eigenvalue problem defined by $\tilde{\mathbf{M}}$ and $\tilde{\mathbf{K}}$, with modal masses normalized to 1 with respect to $\tilde{\mathbf{M}}$, become, cf. Box 4.2

     
$$
\mathbf{R} = \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix}
= \begin{bmatrix} 1.0459 & 0 \\ 0 & 5.3541 \end{bmatrix}, \qquad
\mathbf{Q} = \begin{bmatrix} \mathbf{q}^{(1)} & \mathbf{q}^{(2)} \end{bmatrix}
= \begin{bmatrix} -0.3864 & 0.0269 \\ 0.0887 & -0.5849 \end{bmatrix}
\qquad (3)
$$

The solutions for the eigenvectors become, cf. (4-51)

$$
\bar{\boldsymbol{\Phi}} = \begin{bmatrix} \bar{\boldsymbol{\Phi}}^{(1)} & \bar{\boldsymbol{\Phi}}^{(2)} \end{bmatrix}
= \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} -0.3864 & 0.0269 \\ 0.0887 & -0.5849 \end{bmatrix}
= \begin{bmatrix} -0.2976 & -0.5580 \\ -0.4751 & 0.6118 \\ -0.2976 & -0.5580 \end{bmatrix}
\qquad (4)
$$

The exact eigensolutions can be shown to be

$$
\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}
= \begin{bmatrix} 0.7245 & 0 & 0 \\ 0 & 2.9652 & 0 \\ 0 & 0 & 9.3104 \end{bmatrix}, \qquad
\boldsymbol{\Phi} = \begin{bmatrix} \boldsymbol{\Phi}^{(1)} & \boldsymbol{\Phi}^{(2)} & \boldsymbol{\Phi}^{(3)} \end{bmatrix}
= \begin{bmatrix} -0.0853 & -0.6981 & -0.0458 \\ -0.3884 & -0.0486 & 0.5778 \\ -0.5251 & 0.1997 & -0.8149 \end{bmatrix}
\qquad (5)
$$

As seen, $\rho_1$ and $\rho_2$ are upper bounds on the exact eigenvalues $\lambda_1$ and $\lambda_2$, and $\rho_2$ is smaller than $\lambda_3$, cf. (4-57). The estimates of the eigenmodes are not useful; not even the signs of the components of $\bar{\boldsymbol{\Phi}}^{(2)}$ are correctly represented. These poor results are obtained because the chosen Ritz basis is far from the basis spanned by $\boldsymbol{\Phi}^{(1)}$ and $\boldsymbol{\Phi}^{(2)}$.
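The whole Rayleigh-Ritz computation reduces to a few lines of NumPy/SciPy (a sketch; the sign of the returned eigenvectors is arbitrary and may differ from (3) by a factor $-1$):

```python
import numpy as np
from scipy.linalg import eigh

M = np.array([[2., 0., 0.], [0., 2., 1.], [0., 1., 1.]])
K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])
Psi = np.array([[1., 1.], [1., -1.], [1., 1.]])   # the Ritz basis above

Mt = Psi.T @ M @ Psi          # projected mass matrix [[7, 1], [1, 3]], cf. (2)
Kt = Psi.T @ K @ Psi          # projected stiffness matrix [[8, 4], [4, 16]], cf. (2)
rho, Q = eigh(Kt, Mt)         # 2x2 eigenproblem, cf. Box 4.2
Phi_bar = Psi @ Q             # eigenmode estimates, cf. (4)
print(rho)                    # [1.0459, 5.3541]
```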

A.8 Exercise 4.3


Consider the mass- and stiffness matrices in Exercise 4.2, and let

$$
\mathbf{v} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
\qquad (1)
$$

 
1. Calculate the vector $\bar{\boldsymbol{\Phi}}^{(1)} = \mathbf{K}^{-1}\mathbf{M}\mathbf{v}$, and next $\bar{\lambda}_1 = \rho\big(\bar{\boldsymbol{\Phi}}^{(1)}\big)$, as approximate solutions to the lowest eigenmode and eigenvalue.
2. Establish the error bound for the obtained approximation to the lowest eigenvalue.

SOLUTIONS:

Question 1:

From the given formula we calculate

$$
\bar{\boldsymbol{\Phi}}^{(1)} =
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}
\qquad (2)
$$

The Rayleigh quotient based on Φ̄(1) becomes, cf. (4-25)

$$
\bar{\lambda}_1 = \rho\big(\bar{\boldsymbol{\Phi}}^{(1)}\big)
= \frac{\begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}^T
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}}
{\begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}^T
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}}
= 0.7547
\qquad (3)
$$

The obtained un-normalized eigenmode $\bar{\boldsymbol{\Phi}}^{(1)}$ resembles $\boldsymbol{\Phi}^{(1)}$ much better than the corresponding approximation $\bar{\boldsymbol{\Phi}}^{(1)}$ indicated in eq. (4) of Exercise 4.2. As a consequence the obtained eigenvalue $\bar{\lambda}_1$ is a much better approximation to the exact eigenvalue $\lambda_1 = 0.7245$ given in eq. (5) of Exercise 4.2 than the approximation $\rho_1 = 1.0459$ obtained by the Rayleigh-Ritz analysis. The indicated formula for obtaining $\bar{\boldsymbol{\Phi}}^{(1)}$ represents the 1st iteration step of the so-called inverse vector iteration algorithm described in Section 5.2.

Question 2:

From (2) it follows that
$$
\big\|\bar{\boldsymbol{\Phi}}^{(1)}\big\| = 2.1714
\qquad (4)
$$

The error vector becomes, cf. (4-79)

$$
\boldsymbol{\varepsilon}_1 =
\left(
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
- 0.7547 \cdot
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
\right)
\begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}
= \begin{bmatrix} 1.1698 \\ -0.2075 \\ -0.2264 \end{bmatrix}
\quad\Rightarrow\quad
\big\|\boldsymbol{\varepsilon}_1\big\| = 1.2095
\qquad (5)
$$

The lowest eigenvalue of M can be shown to be

µ1 = 0.3820 (6)

Then, from (4-85) the following bound is obtained

$$
|\lambda_1 - \bar{\lambda}_1| \le \frac{1}{0.3820} \cdot \frac{1.2095}{2.1714} = 1.4583
\qquad (7)
$$

Actually, $|\lambda_1 - \bar{\lambda}_1| = |0.7245 - 0.7547| = 0.0302$. Hence, the bounding method provides a rather crude upper bound in the present case.
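The approximation and its error bound are reproduced by the following sketch, using the bound formula of (4-85) as applied in (7):

```python
import numpy as np

M = np.array([[2., 0., 0.], [0., 2., 1.], [0., 1., 1.]])
K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])
v = np.ones(3)

Phi = np.linalg.solve(K, M @ v)               # K^{-1} M v = [0.55, 1.30, 1.65]
lam = (Phi @ K @ Phi) / (Phi @ M @ Phi)       # Rayleigh quotient: 0.7547
eps = (K - lam * M) @ Phi                     # error vector, cf. (5)
mu1 = np.linalg.eigvalsh(M).min()             # lowest eigenvalue of M: 0.3820
bound = np.linalg.norm(eps) / (mu1 * np.linalg.norm(Phi))   # 1.4583, cf. (7)
```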

A.9 Exercise 5.1


Given the mass- and stiffness matrices defined in Exercise 4.2.

1. Perform two inverse iterations, and then calculate an approximation to λ1 .


2. Perform two forward iterations, and then calculate an approximation to λ3 .

SOLUTIONS:

Question 1:

The calculations are performed with the start vector

$$
\boldsymbol{\Phi}_0 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
\qquad (1)
$$

The matrix A becomes, cf. (5-4)

$$
\mathbf{A} =
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
= \begin{bmatrix} 0.350 & 0.125 & 0.075 \\ 0.100 & 0.750 & 0.450 \\ 0.050 & 0.875 & 0.725 \end{bmatrix}
\qquad (2)
$$

At the 1st and 2nd iteration steps the following calculations are performed, cf. Box 5.1

$$
\bar{\boldsymbol{\Phi}}_1 =
\begin{bmatrix} 0.350 & 0.125 & 0.075 \\ 0.100 & 0.750 & 0.450 \\ 0.050 & 0.875 & 0.725 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}
\;\Rightarrow\;
\bar{\boldsymbol{\Phi}}_1^T \mathbf{M} \bar{\boldsymbol{\Phi}}_1 = 10.9975 \,, \qquad
\boldsymbol{\Phi}_1 = \frac{1}{\sqrt{10.9975}}
\begin{bmatrix} 0.55 \\ 1.30 \\ 1.65 \end{bmatrix}
= \begin{bmatrix} 0.16585 \\ 0.39201 \\ 0.49755 \end{bmatrix}
\qquad (3)
$$

$$
\bar{\boldsymbol{\Phi}}_2 =
\begin{bmatrix} 0.350 & 0.125 & 0.075 \\ 0.100 & 0.750 & 0.450 \\ 0.050 & 0.875 & 0.725 \end{bmatrix}
\begin{bmatrix} 0.16585 \\ 0.39201 \\ 0.49755 \end{bmatrix}
= \begin{bmatrix} 0.14436 \\ 0.53449 \\ 0.71202 \end{bmatrix}
\;\Rightarrow\;
\bar{\boldsymbol{\Phi}}_2^T \mathbf{M} \bar{\boldsymbol{\Phi}}_2 = 1.8812 \,, \qquad
\boldsymbol{\Phi}_2 = \frac{1}{\sqrt{1.8812}}
\begin{bmatrix} 0.14436 \\ 0.53449 \\ 0.71202 \end{bmatrix}
= \begin{bmatrix} 0.10526 \\ 0.38970 \\ 0.51914 \end{bmatrix}
\qquad (4)
$$

Since $\boldsymbol{\Phi}_2$ has been normalized to unit modal mass, $\boldsymbol{\Phi}_2^T \mathbf{M} \boldsymbol{\Phi}_2 = 1$, an approximation to the lowest eigenvalue is obtained from the following Rayleigh quotient, cf. (4-25)

$$
\bar{\lambda}_1 = \boldsymbol{\Phi}_2^T \mathbf{K} \boldsymbol{\Phi}_2
= \begin{bmatrix} 0.10526 \\ 0.38970 \\ 0.51914 \end{bmatrix}^T
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} 0.10526 \\ 0.38970 \\ 0.51914 \end{bmatrix}
= 0.72629
\qquad (5)
$$

The exact solution is λ1 = 0.72446.
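The two iteration steps compress into the following sketch (solving with $\mathbf{K}$ instead of forming $\mathbf{A}$ explicitly):

```python
import numpy as np

M = np.array([[2., 0., 0.], [0., 2., 1.], [0., 1., 1.]])
K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])

Phi = np.ones(3)                         # start vector (1)
for _ in range(2):
    Phi = np.linalg.solve(K, M @ Phi)    # Phi_bar = A Phi with A = K^{-1} M
    Phi /= np.sqrt(Phi @ M @ Phi)        # normalize to unit modal mass
print(Phi @ K @ Phi)                     # Rayleigh quotient: 0.72629
```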

Question 2:

The calculations are performed with the start vector given by (1).

The matrix B becomes, cf. (5-35)

$$
\mathbf{B} =
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}^{-1}
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
= \begin{bmatrix} 3.0 & -0.5 & 0.0 \\ -1.0 & 5.0 & -3.0 \\ 1.0 & -6.0 & 5.0 \end{bmatrix}
\qquad (6)
$$

At the 1st and 2nd iteration steps the following calculations are performed, cf. Box 5.3

$$
\bar{\boldsymbol{\Phi}}_1 =
\begin{bmatrix} 3.0 & -0.5 & 0.0 \\ -1.0 & 5.0 & -3.0 \\ 1.0 & -6.0 & 5.0 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 2.5 \\ 1.0 \\ 0.0 \end{bmatrix}
\;\Rightarrow\;
\bar{\boldsymbol{\Phi}}_1^T \mathbf{M} \bar{\boldsymbol{\Phi}}_1 = 14.5 \,, \qquad
\boldsymbol{\Phi}_1 = \frac{1}{\sqrt{14.5}}
\begin{bmatrix} 2.5 \\ 1.0 \\ 0.0 \end{bmatrix}
= \begin{bmatrix} 0.65653 \\ 0.26261 \\ 0.00000 \end{bmatrix}
\qquad (7)
$$
$$
\bar{\boldsymbol{\Phi}}_2 =
\begin{bmatrix} 3.0 & -0.5 & 0.0 \\ -1.0 & 5.0 & -3.0 \\ 1.0 & -6.0 & 5.0 \end{bmatrix}
\begin{bmatrix} 0.65653 \\ 0.26261 \\ 0.00000 \end{bmatrix}
= \begin{bmatrix} 1.83829 \\ 0.65653 \\ -0.91915 \end{bmatrix}
\;\Rightarrow\;
\bar{\boldsymbol{\Phi}}_2^T \mathbf{M} \bar{\boldsymbol{\Phi}}_2 = 7.25862 \,, \qquad
\boldsymbol{\Phi}_2 = \frac{1}{\sqrt{7.25862}}
\begin{bmatrix} 1.83829 \\ 0.65653 \\ -0.91915 \end{bmatrix}
= \begin{bmatrix} 0.68232 \\ 0.24369 \\ -0.34116 \end{bmatrix}
\qquad (8)
$$
Again, $\boldsymbol{\Phi}_2$ has been normalized to unit modal mass, $\boldsymbol{\Phi}_2^T \mathbf{M} \boldsymbol{\Phi}_2 = 1$, and an approximation is obtained from the following Rayleigh quotient

$$
\bar{\lambda}_3 = \boldsymbol{\Phi}_2^T \mathbf{K} \boldsymbol{\Phi}_2
= \begin{bmatrix} 0.68232 \\ 0.24369 \\ -0.34116 \end{bmatrix}^T
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} 0.68232 \\ 0.24369 \\ -0.34116 \end{bmatrix}
= 3.09739
\qquad (9)
$$

The exact solution is λ3 = 9.31036. The poor result is obtained because Φ2 is a rather bad approximation
to Φ(3) .

A.10 Exercise 5.2


Given the following mass- and stiffness matrices
$$
\mathbf{M} = \begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \frac{1}{2} \end{bmatrix}, \qquad
\mathbf{K} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\qquad (1)
$$

The eigenmodes $\boldsymbol{\Phi}^{(1)}$ and $\boldsymbol{\Phi}^{(3)}$ are known to be, cf. (1-87)
$$
\boldsymbol{\Phi}^{(1)} = \begin{bmatrix} \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{bmatrix}, \qquad
\boldsymbol{\Phi}^{(3)} = \begin{bmatrix} \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{bmatrix}
\qquad (2)
$$

1. Calculate Φ(2) by means of Gram-Schmidt orthogonalization, and calculate all eigenvalues.

SOLUTIONS:

Question 1:

Consider an arbitrary vector

$$
\mathbf{x} = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}
\qquad (3)
$$

Since $\boldsymbol{\Phi}^{(1)}$, $\boldsymbol{\Phi}^{(2)}$ and $\boldsymbol{\Phi}^{(3)}$ form a vector basis, we may write
$$
\mathbf{x} = c_1 \boldsymbol{\Phi}^{(1)} + c_2 \boldsymbol{\Phi}^{(2)} + c_3 \boldsymbol{\Phi}^{(3)}
\qquad (4)
$$


In order to determine the expansion coefficients $c_j$, (4) is premultiplied by $\boldsymbol{\Phi}^{(j)T}\mathbf{M}$, and the M-orthonormality of the eigenmodes is used, i.e. that $\boldsymbol{\Phi}^{(i)T}\mathbf{M}\boldsymbol{\Phi}^{(j)} = \delta_{ij}$. For $j = 1, 3$ the following results are obtained


$$
c_j = \boldsymbol{\Phi}^{(j)T}\mathbf{M}\,\mathbf{x} \,, \qquad
c_1 = \begin{bmatrix} \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{bmatrix}^T
\begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \frac{1}{2} \end{bmatrix}
\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} = \frac{7\sqrt{2}}{4} \,, \qquad
c_3 = \begin{bmatrix} \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{bmatrix}^T
\begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \frac{1}{2} \end{bmatrix}
\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} = -\frac{\sqrt{2}}{4}
\qquad (5)
$$
Then, from (3), (4) and (5) follows

$$
c_2 \boldsymbol{\Phi}^{(2)} =
\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}
- \frac{7\sqrt{2}}{4}
\begin{bmatrix} \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{bmatrix}
+ \frac{\sqrt{2}}{4}
\begin{bmatrix} \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{bmatrix}
= \begin{bmatrix} -0.5 \\ 0.0 \\ 0.5 \end{bmatrix}
\quad\Rightarrow
$$
$$
c_2^2 =
\begin{bmatrix} -0.5 \\ 0.0 \\ 0.5 \end{bmatrix}^T
\begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \frac{1}{2} \end{bmatrix}
\begin{bmatrix} -0.5 \\ 0.0 \\ 0.5 \end{bmatrix}
= 0.25
\quad\Rightarrow\quad
\boldsymbol{\Phi}^{(2)} = \frac{1}{0.5}
\begin{bmatrix} -0.5 \\ 0.0 \\ 0.5 \end{bmatrix}
= \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}
\qquad (6)
$$

Hence, the modal matrix becomes, cf. (1-87)

$$
\boldsymbol{\Phi} = \begin{bmatrix} \boldsymbol{\Phi}^{(1)} & \boldsymbol{\Phi}^{(2)} & \boldsymbol{\Phi}^{(3)} \end{bmatrix}
= \begin{bmatrix}
\frac{\sqrt{2}}{2} & -1 & \frac{\sqrt{2}}{2} \\
\frac{\sqrt{2}}{2} & 0 & -\frac{\sqrt{2}}{2} \\
\frac{\sqrt{2}}{2} & 1 & \frac{\sqrt{2}}{2}
\end{bmatrix}
\qquad (7)
$$

Given that all eigenmodes have been normalized to unit modal mass the eigenvalues may be calculated
from the Rayleigh quotient, cf. (1-21), (4-25)

$$
\boldsymbol{\Lambda} = \boldsymbol{\Phi}^T \mathbf{K} \boldsymbol{\Phi}
= \begin{bmatrix} 2 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 6 \end{bmatrix}
\qquad (8)
$$
Generally, if $n-1$ eigenmodes of a general eigenvalue problem are known, the remaining eigenmode can always be determined solely from the M-orthonormality conditions.
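The Gram-Schmidt construction above reduces to a few lines of NumPy (a minimal sketch of the procedure just described):

```python
import numpy as np

M = np.diag([0.5, 1.0, 0.5])
s = np.sqrt(2.0) / 2.0
Phi1 = np.array([s, s, s])               # known eigenmodes (2)
Phi3 = np.array([s, -s, s])

x = np.array([1., 2., 2.])               # arbitrary vector (3)
for p in (Phi1, Phi3):
    x -= (p @ M @ x) * p                 # strip the component c_j along p, cf. (5)
Phi2 = x / np.sqrt(x @ M @ x)            # renormalize: [-1, 0, 1], cf. (6)
```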

A.11 Exercise 6.3


Given the mass- and stiffness matrices defined in Exercise 4.2.

1. Perform an initial transformation to a special eigenvalue problem, and calculate the eigenvalues and
eigenvectors by means of standard Jacobi iteration.
2. Calculate the eigenvalues and normalized eigenvectors by means of general Jacobi iteration oper-
ating on the original general eigenvalue problem.

SOLUTIONS:

Question 1:

Initially, a Choleski decomposition of the mass matrix is performed, cf. (3-44). As indicated by the
algorithm in Box 3.2 the following calculations are performed

$$
\begin{aligned}
s_{11} &= \sqrt{m_{11}} = \sqrt{2} \\
s_{21} &= \frac{m_{21}}{s_{11}} = \frac{0}{\sqrt{2}} = 0 \\
s_{31} &= \frac{m_{31}}{s_{11}} = \frac{0}{\sqrt{2}} = 0 \\
s_{22} &= \sqrt{m_{22} - s_{21}^2} = \sqrt{2 - 0^2} = \sqrt{2} \\
s_{32} &= \frac{1}{s_{22}}\big(m_{32} - s_{31} s_{21}\big) = \frac{1}{\sqrt{2}}\big(1 - 0 \cdot 0\big) = \frac{\sqrt{2}}{2} \\
s_{33} &= \sqrt{m_{33} - s_{32}^2 - s_{31}^2} = \sqrt{1 - \Big(\tfrac{\sqrt{2}}{2}\Big)^2 - 0^2} = \frac{\sqrt{2}}{2}
\end{aligned}
\qquad (1)
$$

Hence, the matrices S and S−1 become

$$
\mathbf{S} = \begin{bmatrix} \sqrt{2} & 0 & 0 \\ 0 & \sqrt{2} & 0 \\ 0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix}
\quad\Rightarrow\quad
\mathbf{S}^{-1} = \begin{bmatrix} \frac{\sqrt{2}}{2} & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & 0 \\ 0 & -\frac{\sqrt{2}}{2} & \sqrt{2} \end{bmatrix}
\qquad (2)
$$

The initial values of the updated similarity transformation matrix and the stiffness matrix become, cf. (3-48), (6-4)

$$
\boldsymbol{\Phi}_0 = (\mathbf{S}^{-1})^T =
\begin{bmatrix} \frac{\sqrt{2}}{2} & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 & \sqrt{2} \end{bmatrix}
$$
$$
\mathbf{K}_0 = \tilde{\mathbf{K}} = \mathbf{S}^{-1}\mathbf{K}(\mathbf{S}^{-1})^T =
\begin{bmatrix} \frac{\sqrt{2}}{2} & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & 0 \\ 0 & -\frac{\sqrt{2}}{2} & \sqrt{2} \end{bmatrix}
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} \frac{\sqrt{2}}{2} & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 & \sqrt{2} \end{bmatrix}
= \begin{bmatrix} 3.0 & -0.5 & 0.5 \\ -0.5 & 2.0 & -3.0 \\ 0.5 & -3.0 & 8.0 \end{bmatrix}
\qquad (3)
$$

In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
$$
\theta = \frac{1}{2}\arctan\left(\frac{2 \cdot (-0.5)}{3.0 - 2.0}\right) = -0.3927
\;\Rightarrow\;
\cos\theta = 0.9239 \,, \quad \sin\theta = -0.3827
$$
$$
\mathbf{P}_0 = \begin{bmatrix} 0.9239 & 0.3827 & 0 \\ -0.3827 & 0.9239 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_1 = \boldsymbol{\Phi}_0\mathbf{P}_0 =
\begin{bmatrix} 0.6533 & 0.2706 & 0 \\ -0.2706 & 0.6533 & -0.7071 \\ 0 & 0 & 1.4142 \end{bmatrix}
$$
$$
\mathbf{K}_1 = \mathbf{P}_0^T\mathbf{K}_0\mathbf{P}_0 =
\begin{bmatrix} 3.2071 & 0 & 1.6070 \\ 0 & 1.7929 & -2.5803 \\ 1.6070 & -2.5803 & 8.0000 \end{bmatrix}
\qquad (4)
$$

Next, the calculations are performed for (i, j) = (1, 3) :


$$
\theta = \frac{1}{2}\arctan\left(\frac{2 \cdot 1.6070}{3.2071 - 8.0000}\right) = -0.2958
\;\Rightarrow\;
\cos\theta = 0.9566 \,, \quad \sin\theta = -0.2915
$$
$$
\mathbf{P}_1 = \begin{bmatrix} 0.9566 & 0 & 0.2915 \\ 0 & 1 & 0 \\ -0.2915 & 0 & 0.9566 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_2 = \boldsymbol{\Phi}_1\mathbf{P}_1 =
\begin{bmatrix} 0.6249 & 0.2706 & 0.1904 \\ -0.0527 & 0.6533 & -0.7553 \\ -0.4122 & 0 & 1.3528 \end{bmatrix}
$$
$$
\mathbf{K}_2 = \mathbf{P}_1^T\mathbf{K}_1\mathbf{P}_1 =
\begin{bmatrix} 2.7165 & 0.7521 & 0 \\ 0.7521 & 1.7929 & -2.4682 \\ 0 & -2.4682 & 8.4906 \end{bmatrix}
\qquad (5)
$$
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
$$
\theta = \frac{1}{2}\arctan\left(\frac{2 \cdot (-2.4682)}{1.7929 - 8.4906}\right) = 0.3176
\;\Rightarrow\;
\cos\theta = 0.9500 \,, \quad \sin\theta = 0.3123
$$
$$
\mathbf{P}_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.9500 & -0.3123 \\ 0 & 0.3123 & 0.9500 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_3 = \boldsymbol{\Phi}_2\mathbf{P}_2 =
\begin{bmatrix} 0.6249 & 0.3165 & 0.0964 \\ -0.0527 & 0.3848 & -0.9215 \\ -0.4122 & 0.4224 & 1.2852 \end{bmatrix}
$$
$$
\mathbf{K}_3 = \mathbf{P}_2^T\mathbf{K}_2\mathbf{P}_2 =
\begin{bmatrix} 2.7165 & 0.7145 & -0.2349 \\ 0.7145 & 0.9816 & 0 \\ -0.2349 & 0 & 9.3019 \end{bmatrix}
\qquad (6)
$$
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the
eigenvalues
$$
\boldsymbol{\Phi}_6 = \begin{bmatrix} 0.6980 & 0.0862 & 0.0729 \\ 0.0481 & 0.3885 & -0.9202 \\ -0.2004 & 0.5249 & 1.2978 \end{bmatrix}, \qquad
\mathbf{K}_6 = \begin{bmatrix} 2.9652 & 0.0028 & 0.0000 \\ 0.0028 & 0.7245 & 0 \\ 0.0000 & 0 & 9.3104 \end{bmatrix}
$$
$$
\boldsymbol{\Phi}_9 = \begin{bmatrix} 0.6981 & 0.0853 & 0.0729 \\ 0.0486 & 0.3884 & -0.9202 \\ -0.1997 & 0.5251 & 1.2978 \end{bmatrix}, \qquad
\mathbf{K}_9 = \begin{bmatrix} 2.9652 & 0.0000 & 0.0000 \\ 0.0000 & 0.7245 & 0 \\ 0.0000 & 0 & 9.3104 \end{bmatrix}
\qquad (7)
$$
As seen the eigenmodes are stored column-wise in Φ according to the permutation (j1 , j2 , j3 ) = (2, 1, 3),
cf. Box 6.2.
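A sketch of one sweep, following the angle formula and sign convention of (4)-(6) (applied three times to $\mathbf{K}_0$ and $\boldsymbol{\Phi}_0$ of (3), it should reproduce the estimates in (7) up to round-off):

```python
import numpy as np

def jacobi_sweep(K, Phi):
    """One sweep of the special Jacobi iteration over all (i, j) pairs."""
    n = K.shape[0]
    for i in range(n - 1):
        for j in range(i + 1, n):
            if abs(K[i, j]) < 1e-12:
                continue
            theta = 0.5 * np.arctan2(2.0 * K[i, j], K[i, i] - K[j, j])
            c, s = np.cos(theta), np.sin(theta)
            P = np.eye(n)
            P[i, i] = P[j, j] = c
            P[i, j], P[j, i] = -s, s    # e.g. P0[1,2] = -sin(theta) = 0.3827 in (4)
            K = P.T @ K @ P             # annihilates the element K[i, j]
            Phi = Phi @ P
    return K, Phi

s = np.sqrt(2.0) / 2.0
Phi = np.array([[s, 0., 0.], [0., s, -s], [0., 0., 2 * s]])   # Phi0 of (3)
K = np.array([[3.0, -0.5, 0.5], [-0.5, 2.0, -3.0], [0.5, -3.0, 8.0]])
for _ in range(3):
    K, Phi = jacobi_sweep(K, Phi)
print(np.diag(K))    # approaches (2.9652, 0.7245, 9.3104), cf. (7)
```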

Question 2:

The following initializations are introduced, cf. Box 6.4


$$
\mathbf{M}_0 = \mathbf{M} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \qquad
\mathbf{K}_0 = \mathbf{K} = \begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_0 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\qquad (8)
$$

In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
$$
a = \frac{4 \cdot 0 - 2 \cdot (-1)}{6 \cdot 2 - 2 \cdot 4} = 0.5 \,, \quad
b = \frac{6 \cdot 0 - 2 \cdot (-1)}{6 \cdot 2 - 2 \cdot 4} = 0.5
\;\Rightarrow\;
\alpha = -0.4142 \,, \quad \beta = 0.4142
$$
$$
\mathbf{P}_0 = \begin{bmatrix} 1 & 0.4142 & 0 \\ -0.4142 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_1 = \boldsymbol{\Phi}_0\mathbf{P}_0 =
\begin{bmatrix} 1 & 0.4142 & 0 \\ -0.4142 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$
$$
\mathbf{M}_1 = \mathbf{P}_0^T\mathbf{M}_0\mathbf{P}_0 =
\begin{bmatrix} 2.3431 & 0 & -0.4142 \\ 0 & 2.3431 & 1 \\ -0.4142 & 1 & 1 \end{bmatrix}, \qquad
\mathbf{K}_1 = \mathbf{P}_0^T\mathbf{K}_0\mathbf{P}_0 =
\begin{bmatrix} 7.5147 & 0 & 0.4142 \\ 0 & 4.2010 & -1 \\ 0.4142 & -1 & 2 \end{bmatrix}
\qquad (9)
$$

Next, the calculations are performed for (i, j) = (1, 3) :


$$
a = \frac{2 \cdot (-0.4142) - 1 \cdot 0.4142}{7.5147 \cdot 1 - 2.3431 \cdot 2} = -0.4393 \,, \quad
b = \frac{7.5147 \cdot (-0.4142) - 2.3431 \cdot 0.4142}{7.5147 \cdot 1 - 2.3431 \cdot 2} = -1.4437
\;\Rightarrow\;
\alpha = 1.0023 \,, \quad \beta = -0.3050
$$
$$
\mathbf{P}_1 = \begin{bmatrix} 1 & 0 & -0.3050 \\ 0 & 1 & 0 \\ 1.0023 & 0 & 1 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_2 = \boldsymbol{\Phi}_1\mathbf{P}_1 =
\begin{bmatrix} 1 & 0.4142 & -0.3050 \\ -0.4142 & 1 & 0.1263 \\ 1.0023 & 0 & 1 \end{bmatrix}
$$
$$
\mathbf{M}_2 = \mathbf{P}_1^T\mathbf{M}_1\mathbf{P}_1 =
\begin{bmatrix} 2.5174 & 1.0023 & 0 \\ 1.0023 & 2.3431 & 1 \\ 0 & 1 & 1.4707 \end{bmatrix}, \qquad
\mathbf{K}_2 = \mathbf{P}_1^T\mathbf{K}_1\mathbf{P}_1 =
\begin{bmatrix} 10.3542 & -1.0023 & 0 \\ -1.0023 & 4.2010 & -1 \\ 0 & -1 & 2.4465 \end{bmatrix}
\qquad (10)
$$

Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
$$
a = \frac{2.4465 \cdot 1 - 1.4707 \cdot (-1)}{4.2010 \cdot 1.4707 - 2.3431 \cdot 2.4465} = 8.7838 \,, \quad
b = \frac{4.2010 \cdot 1 - 2.3431 \cdot (-1)}{4.2010 \cdot 1.4707 - 2.3431 \cdot 2.4465} = 14.6745
\;\Rightarrow\;
\alpha = -1.2369 \,, \quad \beta = 0.7404
$$
$$
\mathbf{P}_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0.7404 \\ 0 & -1.2369 & 1 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_3 = \boldsymbol{\Phi}_2\mathbf{P}_2 =
\begin{bmatrix} 1 & 0.7915 & 0.0016 \\ -0.4142 & 0.8437 & 0.8667 \\ 1.0023 & -1.2369 & 1 \end{bmatrix}
$$
$$
\mathbf{M}_3 = \mathbf{P}_2^T\mathbf{M}_2\mathbf{P}_2 =
\begin{bmatrix} 2.5174 & 1.0023 & 0.7421 \\ 1.0023 & 2.1193 & 0 \\ 0.7421 & 0 & 4.2357 \end{bmatrix}, \qquad
\mathbf{K}_3 = \mathbf{P}_2^T\mathbf{K}_2\mathbf{P}_2 =
\begin{bmatrix} 10.3542 & -1.0023 & -0.7421 \\ -1.0023 & 10.4174 & 0 \\ -0.7421 & 0 & 3.2684 \end{bmatrix}
\qquad (11)
$$

At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the
transformed mass and stiffness matrices

$$
\boldsymbol{\Phi}_6 = \begin{bmatrix} 1.6779 & -0.0129 & 0.1846 \\ 0.0350 & 1.3400 & 0.8282 \\ -0.3741 & -1.9075 & 1.1188 \end{bmatrix}
$$
$$
\mathbf{M}_6 = \begin{bmatrix} 5.7469 & 0.1959 & -0.0118 \\ 0.1959 & 2.1179 & 0 \\ -0.0118 & 0 & 4.5448 \end{bmatrix}, \qquad
\mathbf{K}_6 = \begin{bmatrix} 17.0856 & -0.1959 & 0.0118 \\ -0.1959 & 19.6067 & 0 \\ 0.0118 & 0 & 3.2925 \end{bmatrix}
$$
$$
\boldsymbol{\Phi}_9 = \begin{bmatrix} 1.6780 & -0.1060 & 0.1819 \\ 0.1169 & 1.3379 & 0.8281 \\ -0.4801 & -1.8869 & 1.1195 \end{bmatrix}
$$
$$
\mathbf{M}_9 = \begin{bmatrix} 5.7769 & 0.0000 & 0.0000 \\ 0.0000 & 2.1139 & 0 \\ 0.0000 & 0 & 4.5448 \end{bmatrix}, \qquad
\mathbf{K}_9 = \begin{bmatrix} 17.1296 & -0.0000 & -0.0000 \\ -0.0000 & 19.6810 & 0 \\ -0.0000 & 0 & 3.2925 \end{bmatrix}
\qquad (12)
$$

Presuming that the process has converged after the 3rd sweep, the eigenvalues and normalized eigenmodes are next retrieved by the following calculations, cf. Box 6.4

$$
\mathbf{m} = \mathbf{M}_9 = \begin{bmatrix} 5.7769 & 0.0000 & 0.0000 \\ 0.0000 & 2.1139 & 0 \\ 0.0000 & 0 & 4.5448 \end{bmatrix}
\quad\Rightarrow\quad
\mathbf{m}^{-\frac{1}{2}} = \begin{bmatrix} 0.4161 & 0 & 0 \\ 0 & 0.6878 & 0 \\ 0 & 0 & 0.4691 \end{bmatrix}
$$
$$
\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_2 & 0 & 0 \\ 0 & \lambda_3 & 0 \\ 0 & 0 & \lambda_1 \end{bmatrix}
= \mathbf{M}_9^{-1}\mathbf{K}_9
= \begin{bmatrix} 2.9652 & -0.0000 & -0.0000 \\ -0.0000 & 9.3104 & 0.0000 \\ -0.0000 & 0.0000 & 0.7245 \end{bmatrix}
$$
$$
\boldsymbol{\Phi} = \begin{bmatrix} \boldsymbol{\Phi}^{(2)} & \boldsymbol{\Phi}^{(3)} & \boldsymbol{\Phi}^{(1)} \end{bmatrix}
= \boldsymbol{\Phi}_9\,\mathbf{m}^{-\frac{1}{2}}
= \begin{bmatrix} 0.6981 & -0.0729 & 0.0853 \\ 0.0486 & 0.9202 & 0.3884 \\ -0.1997 & -1.2978 & 0.5251 \end{bmatrix}
\qquad (13)
$$

The solutions (13) are identical to those obtained in (7) for the special Jacobi iteration algorithm. In the present case the eigenmodes are stored column-wise in $\boldsymbol{\Phi}$ according to the permutation $(j_1, j_2, j_3) = (2, 3, 1)$, cf. Box 6.4. The convergence rates of the special and the general Jacobi iteration algorithms seem to be rather alike.
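As an independent check of both Jacobi variants, the general eigenvalue problem can be handed directly to SciPy, which orders the eigenvalues ascendingly and fixes eigenvector signs arbitrarily:

```python
import numpy as np
from scipy.linalg import eigh

M = np.array([[2., 0., 0.], [0., 2., 1.], [0., 1., 1.]])
K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])
lam, Phi = eigh(K, M)      # M-orthonormal eigenvectors
print(lam)                 # [0.7245, 2.9652, 9.3104]
```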

A.12 Exercise 6.6


Given the mass- and stiffness matrices defined in Exercise 4.2.

1. Calculate the eigenvalues and normalized eigenvectors by means of QR iteration.

SOLUTIONS:

Question 1:

At first, as indicated in Box 6.7, an initial similarity transformation of the indicated general eigenvalue problem into a special eigenvalue problem is performed with the similarity transformation matrix $\mathbf{P} = (\mathbf{S}^{-1})^T$, where $\mathbf{S}$ is a solution to $\mathbf{M} = \mathbf{S}\mathbf{S}^T$. In case $\mathbf{S}$ is determined from a Choleski decomposition of the mass matrix, the initial updated transformation and stiffness matrices have already been calculated in Exercise 6.3, eq. (3). The result becomes

$$
\boldsymbol{\Phi}_1 = (\mathbf{S}^{-1})^T =
\begin{bmatrix} \frac{\sqrt{2}}{2} & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 & \sqrt{2} \end{bmatrix}
$$
$$
\mathbf{K}_1 = \mathbf{S}^{-1}\mathbf{K}(\mathbf{S}^{-1})^T =
\begin{bmatrix} \frac{\sqrt{2}}{2} & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & 0 \\ 0 & -\frac{\sqrt{2}}{2} & \sqrt{2} \end{bmatrix}
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} \frac{\sqrt{2}}{2} & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 & \sqrt{2} \end{bmatrix}
= \begin{bmatrix} 3.0 & -0.5 & 0.5 \\ -0.5 & 2.0 & -3.0 \\ 0.5 & -3.0 & 8.0 \end{bmatrix}
\qquad (1)
$$

As seen, the original tridiagonal structure of $\mathbf{K}$ is destroyed by the similarity transformation. This may be reestablished by means of a Householder transformation as described in Section 6.4. However, this will be omitted here, so the QR iteration is performed on the full matrix $\mathbf{K}_1$.

At the determination of q1 and r11 in the 1st QR iteration the following calculations are performed, cf.
(6-72)

$$
\mathbf{k}_1 = \begin{bmatrix} 3.0 \\ -0.5 \\ 0.5 \end{bmatrix}, \quad
r_{11} = \left\| \begin{bmatrix} 3.0 \\ -0.5 \\ 0.5 \end{bmatrix} \right\| = 3.0822 \,, \qquad
\mathbf{q}_1 = \frac{1}{3.0822}\begin{bmatrix} 3.0 \\ -0.5 \\ 0.5 \end{bmatrix}
= \begin{bmatrix} 0.9733 \\ -0.1622 \\ 0.1622 \end{bmatrix}
\qquad (2)
$$

q2 and r12 , r22 are determined from the following calculations, cf. (6-73)

$$
\mathbf{k}_2 = \begin{bmatrix} -0.5 \\ 2.0 \\ -3.0 \end{bmatrix}, \quad
r_{12} = \begin{bmatrix} 0.9733 \\ -0.1622 \\ 0.1622 \end{bmatrix}^T
\begin{bmatrix} -0.5 \\ 2.0 \\ -3.0 \end{bmatrix} = -1.2978
$$
$$
r_{22} = \left\| \begin{bmatrix} -0.5 \\ 2.0 \\ -3.0 \end{bmatrix}
+ 1.2978 \cdot \begin{bmatrix} 0.9733 \\ -0.1622 \\ 0.1622 \end{bmatrix} \right\| = 3.4009 \,, \qquad
\mathbf{q}_2 = \frac{1}{3.4009}\left(
\begin{bmatrix} -0.5 \\ 2.0 \\ -3.0 \end{bmatrix}
+ 1.2978 \cdot \begin{bmatrix} 0.9733 \\ -0.1622 \\ 0.1622 \end{bmatrix}\right)
= \begin{bmatrix} 0.2244 \\ 0.5262 \\ -0.8202 \end{bmatrix}
\qquad (3)
$$

q3 and r13 , r23 , r33 are determined from the following calculations, cf. (6-74)

$$
\mathbf{k}_3 = \begin{bmatrix} 0.5 \\ -3.0 \\ 8.0 \end{bmatrix}, \quad
r_{13} = \mathbf{q}_1^T\mathbf{k}_3 = 2.2711 \,, \quad
r_{23} = \mathbf{q}_2^T\mathbf{k}_3 = -8.0282
$$
$$
r_{33} = \big\| \mathbf{k}_3 - 2.2711\,\mathbf{q}_1 + 8.0282\,\mathbf{q}_2 \big\| = 1.9080 \,, \qquad
\mathbf{q}_3 = \frac{1}{1.9080}\big( \mathbf{k}_3 - 2.2711\,\mathbf{q}_1 + 8.0282\,\mathbf{q}_2 \big)
= \begin{bmatrix} 0.0477 \\ 0.8348 \\ 0.5486 \end{bmatrix}
\qquad (4)
$$

Then, at the end of the 1st iteration the following matrices are obtained

$$
\mathbf{Q}_1 = \begin{bmatrix} 0.9733 & 0.2244 & 0.0477 \\ -0.1622 & 0.5262 & 0.8348 \\ 0.1622 & -0.8202 & 0.5486 \end{bmatrix}, \qquad
\mathbf{R}_1 = \begin{bmatrix} 3.0822 & -1.2978 & 2.2711 \\ 0 & 3.4009 & -8.0282 \\ 0 & 0 & 1.9080 \end{bmatrix}
$$
$$
\boldsymbol{\Phi}_2 = \boldsymbol{\Phi}_1\mathbf{Q}_1 =
\begin{bmatrix} 0.6882 & 0.1587 & 0.0337 \\ -0.2294 & 0.9521 & 0.2024 \\ 0.2294 & -1.1600 & 0.7758 \end{bmatrix}, \qquad
\mathbf{K}_2 = \mathbf{R}_1\mathbf{Q}_1 =
\begin{bmatrix} 3.5789 & -1.8540 & 0.3095 \\ -1.8540 & 8.3744 & -1.5650 \\ 0.3095 & -1.5650 & 1.0466 \end{bmatrix}
\qquad (5)
$$

The corresponding matrices after the 2nd and 3rd iteration become

$$
\mathbf{Q}_2 = \begin{bmatrix} 0.8853 & 0.4648 & 0.0115 \\ -0.4586 & 0.8689 & 0.1861 \\ 0.0766 & -0.1700 & 0.9825 \end{bmatrix}, \qquad
\mathbf{R}_2 = \begin{bmatrix} 4.0425 & -5.6020 & 1.0719 \\ 0 & 6.6809 & -1.3940 \\ 0 & 0 & 0.7405 \end{bmatrix}
$$
$$
\boldsymbol{\Phi}_3 = \boldsymbol{\Phi}_2\mathbf{Q}_2 =
\begin{bmatrix} 0.5391 & 0.4521 & 0.0706 \\ -0.6243 & 0.6862 & 0.3734 \\ 0.7945 & -1.0332 & 0.5489 \end{bmatrix}, \qquad
\mathbf{K}_3 = \mathbf{R}_2\mathbf{Q}_2 =
\begin{bmatrix} 6.2303 & -3.1708 & 0.0567 \\ -3.1708 & 6.0422 & -0.1259 \\ 0.0567 & -0.1259 & 0.7275 \end{bmatrix}
\qquad (6)
$$

$$
\mathbf{Q}_3 = \begin{bmatrix} 0.8912 & 0.4536 & 0.0021 \\ -0.4536 & 0.8910 & 0.0219 \\ 0.0081 & -0.0205 & 0.9998 \end{bmatrix}, \qquad
\mathbf{R}_3 = \begin{bmatrix} 6.9910 & -5.5673 & 0.1135 \\ 0 & 3.9475 & -0.1014 \\ 0 & 0 & 0.7274 \end{bmatrix}
$$
$$
\boldsymbol{\Phi}_4 = \boldsymbol{\Phi}_3\mathbf{Q}_3 =
\begin{bmatrix} 0.2760 & 0.6459 & 0.0816 \\ -0.8645 & 0.3206 & 0.3871 \\ 1.1811 & -0.5714 & 0.5277 \end{bmatrix}, \qquad
\mathbf{K}_4 = \mathbf{R}_3\mathbf{Q}_3 =
\begin{bmatrix} 8.7566 & -1.7913 & 0.0059 \\ -1.7913 & 3.5192 & -0.0148 \\ 0.0059 & -0.0148 & 0.7245 \end{bmatrix}
\qquad (7)
$$

As seen from $\mathbf{R}_3$ and $\mathbf{K}_4$, already after the 3rd iteration the terms in the main diagonal have grouped in descending magnitude, corresponding to the ordering of the eigenvalues at convergence indicated in Box 6.7. Moreover, for both matrices convergence to the lowest eigenvalue $\lambda_1 = 0.7245$ has occurred, illustrating the fact that the QR algorithm converges faster to the lowest eigenmode than to the highest.
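A sketch of the iteration using NumPy's built-in QR factorization in place of the hand-worked Gram-Schmidt steps (2)-(4); NumPy may choose different column signs in $\mathbf{Q}_s$ and $\mathbf{R}_s$, but the diagonal of $\mathbf{K}_s$ converges to the same eigenvalues:

```python
import numpy as np

def qr_step(K):
    """One QR iteration step: K_s = Q_s R_s, then K_{s+1} = R_s Q_s,
    which is a similarity transformation of K_s."""
    Q, R = np.linalg.qr(K)
    return R @ Q

K = np.array([[3.0, -0.5, 0.5], [-0.5, 2.0, -3.0], [0.5, -3.0, 8.0]])  # K1 of (1)
for _ in range(14):
    K = qr_step(K)
print(np.diag(K))     # approaches [9.3104, 2.9652, 0.7245], cf. (8)
```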

The matrices after the 14th iteration become

$$
\mathbf{Q}_{14} = \begin{bmatrix} 1.0000 & 0.0000 & 0.0000 \\ -0.0000 & 1.0000 & 0.0000 \\ 0.0000 & -0.0000 & 1.0000 \end{bmatrix}, \qquad
\mathbf{R}_{14} = \begin{bmatrix} 9.3104 & -0.0000 & 0.0000 \\ 0 & 2.9652 & -0.0000 \\ 0 & 0 & 0.7245 \end{bmatrix}
$$
$$
\boldsymbol{\Phi}_{15} = \boldsymbol{\Phi}_{14}\mathbf{Q}_{14} =
\begin{bmatrix} 0.0729 & 0.6981 & 0.0853 \\ -0.9202 & 0.0486 & 0.3884 \\ 1.2978 & -0.1997 & 0.5251 \end{bmatrix}, \qquad
\mathbf{K}_{15} = \mathbf{R}_{14}\mathbf{Q}_{14} =
\begin{bmatrix} 9.3104 & -0.0000 & 0.0000 \\ -0.0000 & 2.9652 & -0.0000 \\ 0.0000 & -0.0000 & 0.7245 \end{bmatrix}
\qquad (8)
$$

Presuming that convergence has occurred after the 14th iteration the following solutions are obtained for
the eigenvalues and eigenmodes of the original general eigenvalue problem

$$
\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_3 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_1 \end{bmatrix}
= \mathbf{K}_{15} =
\begin{bmatrix} 9.3104 & -0.0000 & 0.0000 \\ -0.0000 & 2.9652 & -0.0000 \\ 0.0000 & -0.0000 & 0.7245 \end{bmatrix}
$$
$$
\boldsymbol{\Phi} = \begin{bmatrix} \boldsymbol{\Phi}^{(3)} & \boldsymbol{\Phi}^{(2)} & \boldsymbol{\Phi}^{(1)} \end{bmatrix}
= \boldsymbol{\Phi}_{15} =
\begin{bmatrix} 0.0729 & 0.6981 & 0.0853 \\ -0.9202 & 0.0486 & 0.3884 \\ 1.2978 & -0.1997 & 0.5251 \end{bmatrix}
\qquad (9)
$$
The solution (9) agrees with the corresponding solutions for the special and general Jacobi iteration algorithms obtained in Exercise 6.3, eq. (7) and (13), respectively.

A.13 Exercise 7.1


Given the mass- and stiffness matrices defined in Exercise 4.2.

1. Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous inverse vec-
tor iteration with the start vector basis
$$
\boldsymbol{\Phi}_0 = \begin{bmatrix} \boldsymbol{\Phi}_0^{(1)} & \boldsymbol{\Phi}_0^{(2)} \end{bmatrix}
= \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 1 & -1 \end{bmatrix}
$$

SOLUTIONS:

Question 1:

The matrix A becomes, cf. (5-4)

$$
\mathbf{A} = \mathbf{K}^{-1}\mathbf{M} =
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
= \begin{bmatrix} 0.350 & 0.125 & 0.075 \\ 0.100 & 0.750 & 0.450 \\ 0.050 & 0.875 & 0.725 \end{bmatrix}
\qquad (1)
$$

Then, the 1st iterated vector basis becomes, cf. (5-4)

$$
\bar{\boldsymbol{\Phi}}_1 = \begin{bmatrix} \bar{\boldsymbol{\Phi}}_1^{(1)} & \bar{\boldsymbol{\Phi}}_1^{(2)} \end{bmatrix}
= \mathbf{A}\boldsymbol{\Phi}_0 =
\begin{bmatrix} 0.350 & 0.125 & 0.075 \\ 0.100 & 0.750 & 0.450 \\ 0.050 & 0.875 & 0.725 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 1 & -1 \end{bmatrix}
= \begin{bmatrix} 0.550 & 0.275 \\ 1.300 & -0.350 \\ 1.650 & -0.675 \end{bmatrix}
\qquad (2)
$$

At the determination of $\boldsymbol{\Phi}_1^{(1)}$ and $r_{11}$ in the 1st vector iteration the following calculations are performed, cf. (7-13)

$$
\bar{\boldsymbol{\Phi}}_1^{(1)} = \begin{bmatrix} 0.550 \\ 1.300 \\ 1.650 \end{bmatrix}, \quad
r_{11} = \left(
\begin{bmatrix} 0.550 \\ 1.300 \\ 1.650 \end{bmatrix}^T
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 0.550 \\ 1.300 \\ 1.650 \end{bmatrix}
\right)^{\frac{1}{2}} = 3.3162
$$
$$
\boldsymbol{\Phi}_1^{(1)} = \frac{1}{3.3162}\begin{bmatrix} 0.550 \\ 1.300 \\ 1.650 \end{bmatrix}
= \begin{bmatrix} 0.1659 \\ 0.3920 \\ 0.4976 \end{bmatrix}
\qquad (3)
$$
$\boldsymbol{\Phi}_1^{(2)}$ and $r_{12}$, $r_{22}$ are determined from the following calculations, cf. (7-15)

$$
\bar{\boldsymbol{\Phi}}_1^{(2)} = \begin{bmatrix} 0.275 \\ -0.350 \\ -0.675 \end{bmatrix}, \quad
r_{12} = \begin{bmatrix} 0.1659 \\ 0.3920 \\ 0.4976 \end{bmatrix}^T
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 0.275 \\ -0.350 \\ -0.675 \end{bmatrix} = -0.9578
$$
$$
r_{22} = \left(
\Big(\bar{\boldsymbol{\Phi}}_1^{(2)} + 0.9578\,\boldsymbol{\Phi}_1^{(1)}\Big)^T \mathbf{M}
\Big(\bar{\boldsymbol{\Phi}}_1^{(2)} + 0.9578\,\boldsymbol{\Phi}_1^{(1)}\Big)
\right)^{\frac{1}{2}} = 0.6380 \,, \qquad
\boldsymbol{\Phi}_1^{(2)} = \frac{1}{0.6380}\left(
\begin{bmatrix} 0.275 \\ -0.350 \\ -0.675 \end{bmatrix}
+ 0.9578 \cdot \begin{bmatrix} 0.1659 \\ 0.3920 \\ 0.4976 \end{bmatrix}\right)
= \begin{bmatrix} 0.6800 \\ 0.0399 \\ -0.3111 \end{bmatrix}
\qquad (4)
$$

Then, at the end of the 1st iteration the following matrices are obtained
$$
\mathbf{R}_1 = \begin{bmatrix} 3.3162 & -0.9578 \\ 0 & 0.6380 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_1 = \begin{bmatrix} 0.1659 & 0.6800 \\ 0.3920 & 0.0399 \\ 0.4976 & -0.3111 \end{bmatrix}
\qquad (5)
$$
The reader should verify that Φ1 R1 = Φ̄1 . The corresponding matrices after the 2nd and 3rd iteration
become
$$
\mathbf{R}_2 = \begin{bmatrix} 1.3716 & -0.1507 \\ 0 & 0.3392 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_2 = \begin{bmatrix} 0.1053 & 0.6944 \\ 0.3897 & 0.0492 \\ 0.5191 & -0.2311 \end{bmatrix}
\qquad (6)
$$

$$
\mathbf{R}_3 = \begin{bmatrix} 1.3798 & -0.0371 \\ 0 & 0.3374 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_3 = \begin{bmatrix} 0.0902 & 0.6972 \\ 0.3888 & 0.0496 \\ 0.5237 & -0.2086 \end{bmatrix}
\qquad (7)
$$

Convergence of the eigenmodes with the indicated number of digits was achieved after 9 iterations, where

$$
\mathbf{R}_9 = \begin{bmatrix} 1.3803 & -0.0000 \\ 0 & 0.3372 \end{bmatrix}, \qquad
\boldsymbol{\Phi}_9 = \begin{bmatrix} 0.0853 & 0.6981 \\ 0.3884 & 0.0486 \\ 0.5251 & -0.1997 \end{bmatrix}
\qquad (8)
$$

Presuming that convergence has occurred after the 9th iteration, the following eigenvalues and eigenmodes are obtained from (10-10) and (10-12)
$$
\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
= \boldsymbol{\Phi}_9^T \mathbf{K} \boldsymbol{\Phi}_9 = \mathbf{R}_\infty^{-1}
= \begin{bmatrix} 0.7245 & 0.0000 \\ 0.0000 & 2.9652 \end{bmatrix}
$$
$$
\boldsymbol{\Phi} = \begin{bmatrix} \boldsymbol{\Phi}^{(1)} & \boldsymbol{\Phi}^{(2)} \end{bmatrix}
= \boldsymbol{\Phi}_9 =
\begin{bmatrix} 0.0853 & 0.6981 \\ 0.3884 & 0.0486 \\ 0.5251 & -0.1997 \end{bmatrix}
\qquad (9)
$$

The solution (9) agrees with the corresponding solutions obtained in Exercises 6.3 and 6.6.
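The complete iteration reduces to the following sketch (inverse iteration with Gram-Schmidt M-orthonormalization as in (3) and (4); column signs may differ from (8)):

```python
import numpy as np

M = np.array([[2., 0., 0.], [0., 2., 1.], [0., 1., 1.]])
K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])
Phi = np.array([[1., 1.], [1., 0.], [1., -1.]])     # start basis

for _ in range(9):
    Phi = np.linalg.solve(K, M @ Phi)               # Phi_bar = K^{-1} M Phi
    for j in range(Phi.shape[1]):                   # M-orthonormalization,
        for k in range(j):                          # cf. (3) and (4)
            Phi[:, j] -= (Phi[:, k] @ M @ Phi[:, j]) * Phi[:, k]
        Phi[:, j] /= np.sqrt(Phi[:, j] @ M @ Phi[:, j])

print(np.diag(Phi.T @ K @ Phi))                     # [0.7245, 2.9652], cf. (9)
```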

A.14 Exercise 7.3


Given the mass- and stiffness matrices defined in Exercise 4.2.

1. Calculate the two lowest eigenmodes and corresponding eigenvalues by subspace iteration with the
start vector basis
$$
\boldsymbol{\Phi}_0 = \begin{bmatrix} \boldsymbol{\Phi}_0^{(1)} & \boldsymbol{\Phi}_0^{(2)} \end{bmatrix}
= \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 1 & -1 \end{bmatrix}
$$

SOLUTION:

Question 1:

The matrix A becomes, cf. (5-4)

$$
\mathbf{A} = \mathbf{K}^{-1}\mathbf{M} =
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
= \begin{bmatrix} 0.350 & 0.125 & 0.075 \\ 0.100 & 0.750 & 0.450 \\ 0.050 & 0.875 & 0.725 \end{bmatrix}
\qquad (1)
$$

Then, the 1st iterated vector basis becomes, cf. (7-4)


$$
\bar{\boldsymbol{\Phi}}_1 = \begin{bmatrix} \bar{\boldsymbol{\Phi}}_1^{(1)} & \bar{\boldsymbol{\Phi}}_1^{(2)} \end{bmatrix}
= \mathbf{A}\boldsymbol{\Phi}_0 =
\begin{bmatrix} 0.350 & 0.125 & 0.075 \\ 0.100 & 0.750 & 0.450 \\ 0.050 & 0.875 & 0.725 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 1 & -1 \end{bmatrix}
= \begin{bmatrix} 0.550 & 0.275 \\ 1.300 & -0.350 \\ 1.650 & -0.675 \end{bmatrix}
\qquad (2)
$$

In order to perform the Rayleigh-Ritz analysis in the 1st subspace iteration the following projected mass
and stiffness matrices are calculated based on Φ̄1 , cf. (4-59), (4-60), (7-31)

$$
\tilde{\mathbf{M}}_1 = \bar{\boldsymbol{\Phi}}_1^T\mathbf{M}\bar{\boldsymbol{\Phi}}_1 =
\begin{bmatrix} 0.550 & 0.275 \\ 1.300 & -0.350 \\ 1.650 & -0.675 \end{bmatrix}^T
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 0.550 & 0.275 \\ 1.300 & -0.350 \\ 1.650 & -0.675 \end{bmatrix}
= \begin{bmatrix} 10.998 & -3.1763 \\ -3.1763 & 1.3244 \end{bmatrix}
$$
$$
\tilde{\mathbf{K}}_1 = \bar{\boldsymbol{\Phi}}_1^T\mathbf{K}\bar{\boldsymbol{\Phi}}_1 =
\begin{bmatrix} 0.550 & 0.275 \\ 1.300 & -0.350 \\ 1.650 & -0.675 \end{bmatrix}^T
\begin{bmatrix} 6 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} 0.550 & 0.275 \\ 1.300 & -0.350 \\ 1.650 & -0.675 \end{bmatrix}
= \begin{bmatrix} 8.300 & -1.850 \\ -1.850 & 1.575 \end{bmatrix}
\qquad (3)
$$

The corresponding eigenvalue problem (7-30) becomes

$$
\tilde{\mathbf{K}}_1\mathbf{Q}_1 = \tilde{\mathbf{M}}_1\mathbf{Q}_1\mathbf{R}_1
\quad\Rightarrow\quad
\begin{bmatrix} 8.300 & -1.850 \\ -1.850 & 1.575 \end{bmatrix}
\begin{bmatrix} \mathbf{q}_1^{(1)} & \mathbf{q}_1^{(2)} \end{bmatrix}
= \begin{bmatrix} 10.998 & -3.1763 \\ -3.1763 & 1.3244 \end{bmatrix}
\begin{bmatrix} \mathbf{q}_1^{(1)} & \mathbf{q}_1^{(2)} \end{bmatrix}
\begin{bmatrix} \rho_{1,1} & 0 \\ 0 & \rho_{2,1} \end{bmatrix}
\quad\Rightarrow
$$
$$
\mathbf{R}_1 = \begin{bmatrix} 0.7246 & 0 \\ 0 & 2.9752 \end{bmatrix}, \qquad
\mathbf{Q}_1 = \begin{bmatrix} -0.2471 & -0.4845 \\ 0.1813 & -1.5569 \end{bmatrix}
\qquad (4)
$$

The estimate of the lowest eigenvectors after the 1st iteration becomes, cf. (7-34)

$$
\boldsymbol{\Phi}_1 = \bar{\boldsymbol{\Phi}}_1\mathbf{Q}_1 =
\begin{bmatrix} 0.550 & 0.275 \\ 1.300 & -0.350 \\ 1.650 & -0.675 \end{bmatrix}
\begin{bmatrix} -0.2471 & -0.4845 \\ 0.1813 & -1.5569 \end{bmatrix}
= \begin{bmatrix} -0.0861 & -0.6947 \\ -0.3848 & -0.0850 \\ -0.5302 & 0.2514 \end{bmatrix}
\qquad (5)
$$

Correspondingly, after the 2nd and 9th iteration steps the following matrices are calculated
$$
\mathbf{R}_2 = \begin{bmatrix} 0.7245 & 0 \\ 0 & 2.9662 \end{bmatrix}, \quad
\mathbf{Q}_2 = \begin{bmatrix} -0.7245 & -0.0013 \\ 0.0004 & -2.9673 \end{bmatrix}, \quad
\boldsymbol{\Phi}_2 = \begin{bmatrix} 0.0854 & 0.6972 \\ 0.3881 & 0.0603 \\ 0.5255 & -0.2162 \end{bmatrix}
\qquad (6)
$$
$$
\mathbf{R}_9 = \begin{bmatrix} 0.7245 & 0 \\ 0 & 2.9652 \end{bmatrix}, \quad
\mathbf{Q}_9 = \begin{bmatrix} -0.7245 & -0.0000 \\ 0.0000 & -2.9652 \end{bmatrix}, \quad
\boldsymbol{\Phi}_9 = \begin{bmatrix} -0.0853 & -0.6981 \\ -0.3884 & -0.0486 \\ -0.5251 & 0.1997 \end{bmatrix}
\qquad (7)
$$

The subspace iteration process converged with the indicated accuracy after 8 iterations.

Finally, it should be checked that the calculated eigenvalues are indeed the lowest two by a Sturm sequence or Gauss factorization check. The 2nd calculated eigenvalue becomes $\rho_{2,9} = 2.9652$. Then, let $\mu = 3.1$ and perform a Gauss factorization of the matrix $\mathbf{K} - 3.1\,\mathbf{M}$, i.e.

$$
\mathbf{K} - 3.1\,\mathbf{M} =
\begin{bmatrix} -0.2 & -1.0 & 0 \\ -1.0 & -2.2 & -4.1 \\ 0 & -4.1 & -1.1 \end{bmatrix}
= \mathbf{L}\mathbf{D}\mathbf{L}^T =
\begin{bmatrix} 1 & 0 & 0 \\ 5 & 1 & 0 \\ 0 & -1.4643 & 1 \end{bmatrix}
\begin{bmatrix} -0.2 & 0 & 0 \\ 0 & 2.8 & 0 \\ 0 & 0 & -7.1036 \end{bmatrix}
\begin{bmatrix} 1 & 5 & 0 \\ 0 & 1 & -1.4643 \\ 0 & 0 & 1 \end{bmatrix}
\qquad (8)
$$

It follows that two elements in the main diagonal of $\mathbf{D}$ are negative, from which it is concluded that two eigenvalues are smaller than $\mu = 3.1$. In turn this means that the two eigensolutions obtained by the subspace iteration are indeed the lowest two eigensolutions of the original system.
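This check is easily automated. A sketch of the pivot count (assuming the factorization exists without pivoting, i.e. no zero pivots are encountered; by Sylvester's law of inertia the number of negative pivots equals the number of eigenvalues below the shift):

```python
import numpy as np

def negative_pivots(A):
    """Gauss factorization A = L D L^T without pivoting; returns the
    number of negative pivots = eigenvalues of (K, M) below the shift."""
    A = A.astype(float).copy()
    n = A.shape[0]
    count = 0
    for k in range(n):
        if A[k, k] < 0.0:
            count += 1
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k+1:, k]) / A[k, k]
    return count

M = np.array([[2., 0., 0.], [0., 2., 1.], [0., 1., 1.]])
K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])
print(negative_pivots(K - 3.1 * M))   # 2, as found in (8)
```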

The solutions (7) agree with the corresponding results obtained in Exercises 6.3, 6.6 and 7.1.

A.15 Exercise 7.5


Given the mass- and stiffness matrices defined in Exercise 4.2.

1. Calculate the 3rd eigenmode and eigenvalue by Sturm sequence iteration (telescope method).

SOLUTION:

Question 1:

At first a calculation with µ = 2.5 is performed, which produces the following results

$$
\mathbf{K} - 2.5\,\mathbf{M} =
\begin{bmatrix} 1.0 & -1.0 & 0.0 \\ -1.0 & -1.0 & -3.5 \\ 0.0 & -3.5 & -0.5 \end{bmatrix}
\quad\Rightarrow
$$
$$
\begin{aligned}
P^{(3)}(2.5) &= 1 \,, & \operatorname{sign}\big(P^{(3)}(2.5)\big) &= + \\
P^{(2)}(2.5) &= 1.0 \,, & \operatorname{sign}\big(P^{(2)}(2.5)\big) &= + \\
P^{(1)}(2.5) &= 1.0 \cdot (-1.0) - (-1.0)^2 = -2.0 \,, & \operatorname{sign}\big(P^{(1)}(2.5)\big) &= - \\
P^{(0)}(2.5) &= -0.5 \cdot (-2.0) - (-3.5)^2 \cdot 1.0 = -11.25 \,, & \operatorname{sign}\big(P^{(0)}(2.5)\big) &= -
\end{aligned}
\qquad (1)
$$
(0)

Hence, the sign sequence of the Sturm sequence becomes $+ + - -$, corresponding to the number of sign changes $n_{\text{sign}} = 1$ in the sequence. One eigenvalue is smaller than $\mu = 2.5$.

Similar calculations are performed for µ = 3.5, 4.5, . . . , 9.5

µ = 3.5 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 4.5 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 5.5 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 6.5 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 7.5 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 8.5 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 9.5 : Sign sequence = + − + − ⇒ n_sign = 3   (2)

From this it is concluded that the 3rd eigenvalue is placed somewhere in the interval 8.5 < λ3 < 9.5.

Next, similar calculations are performed for µ = 8.6, 8.7, . . . , 9.4

µ = 8.6 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 8.7 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 8.8 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 8.9 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 9.0 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 9.1 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 9.2 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 9.3 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 9.4 : Sign sequence = + − + − ⇒ n_sign = 3   (3)

From this it is concluded that the 3rd eigenvalue is confined to the interval 9.3 < λ3 < 9.4.

Next, calculations are performed for µ = 9.31, 9.32, . . . , 9.39

µ = 9.31 : Sign sequence = + − + + ⇒ n_sign = 2
µ = 9.32 : Sign sequence = + − + − ⇒ n_sign = 3
µ = 9.33 : Sign sequence = + − + − ⇒ n_sign = 3
µ = 9.34 : Sign sequence = + − + − ⇒ n_sign = 3
µ = 9.35 : Sign sequence = + − + − ⇒ n_sign = 3
µ = 9.36 : Sign sequence = + − + − ⇒ n_sign = 3
µ = 9.37 : Sign sequence = + − + − ⇒ n_sign = 3
µ = 9.38 : Sign sequence = + − + − ⇒ n_sign = 3
µ = 9.39 : Sign sequence = + − + − ⇒ n_sign = 3   (4)

From this it is concluded that the 3rd eigenvalue is confined to the interval 9.31 < λ3 < 9.32.

Proceeding in this manner, it may be shown after a total of 52 Sturm sequence calculations that the 3rd eigenvalue is confined to the interval 9.31036 < λ3 < 9.31037. Each extra digit requires 9 calculations.
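The decimal scanning above can be replaced by bisection on the Sturm count; a sketch (counting sign changes in the sequence of leading principal minors of $\mathbf{K} - \mu\mathbf{M}$, which equals the number of eigenvalues below $\mu$) converges to the same value:

```python
import numpy as np

M = np.array([[2., 0., 0.], [0., 2., 1.], [0., 1., 1.]])
K = np.array([[6., -1., 0.], [-1., 4., -1.], [0., -1., 2.]])

def n_sign_changes(mu):
    """Sign changes in the Sturm sequence P^(n), ..., P^(0) of K - mu*M."""
    A = K - mu * M
    n = A.shape[0]
    P = [1.0] + [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    return sum(np.sign(a) != np.sign(b) for a, b in zip(P, P[1:]))

lo, hi = 8.5, 9.5                    # bracket for lambda_3 found in (2)
for _ in range(50):                  # bisection instead of decimal scanning
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if n_sign_changes(mid) < 3 else (lo, mid)
print(0.5 * (lo + hi))               # 9.310365...
```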

Setting $\lambda_3 \approx 9.310365$, the linear equation (7-63) attains the form

$$
\big(\mathbf{K} - 9.310365\,\mathbf{M}\big)\bar{\boldsymbol{\Phi}}^{(3)} = \mathbf{0}
\quad\Rightarrow\quad
\begin{bmatrix} -12.6207 & -1 & 0 \\ -1 & -14.6207 & -10.3104 \\ 0 & -10.3104 & -7.3104 \end{bmatrix}
\begin{bmatrix} \bar{\Phi}_1^{(3)} \\ \bar{\Phi}_2^{(3)} \\ \bar{\Phi}_3^{(3)} \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\qquad (5)
$$

Setting $\bar{\Phi}_1^{(3)} = 1$, the algorithm (7-64) now provides


$$
\begin{aligned}
\bar{\Phi}_2^{(3)} &= -\frac{(-12.6207)}{(-1)} \cdot 1 = -12.6207 \\
\bar{\Phi}_3^{(3)} &= -\frac{(-1)}{(-10.3104)} \cdot 1 - \frac{(-14.6207)}{(-10.3104)} \cdot (-12.6207) = 17.7800
\end{aligned}
\quad\Rightarrow\quad
\bar{\boldsymbol{\Phi}}^{(3)} = \begin{bmatrix} 1 \\ -12.6207 \\ 17.7800 \end{bmatrix}
\qquad (6)
$$

Normalization to unit modal mass provides

$$
\boldsymbol{\Phi}^{(3)} = \begin{bmatrix} 0.0729 \\ -0.9202 \\ 1.2978 \end{bmatrix}
\qquad (7)
$$

The eigenvalue $\lambda_3 \approx 9.310365$ and the corresponding eigenmode $\boldsymbol{\Phi}^{(3)}$ given by (7) agree with the corresponding results obtained in Exercises 6.3 and 6.6.
