[A]{X} = {C}
file: matrix.ppt p. 1
0.3x1 + 0.52x2 + x3 = 0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = 0.44

[A]{X} = {C}
file: matrix.ppt p. 2
How to solve a system of linear equations
• Graphical method
• Matrix inversion (Adjoint formula for A-1)
• Cramer’s rule
• Gauss elimination
• LU decomposition
• Gauss Seidel
file: matrix.ppt p. 3
Graphical Method
2 equations, 2 unknowns:

a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2

Solve each equation for x2 and plot both lines in the (x1, x2) plane:

x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22

The solution (x1, x2) is the intersection of the two lines.
file: matrix.ppt p. 4
Example:

3x1 + 2x2 = 18
-x1 + 2x2 = 2

Rearranged for plotting:

x2 = -(3/2) x1 + 9
x2 = (1/2) x1 + 1

The lines intersect at (4, 3).

Check: 3(4) + 2(3) = 12 + 6 = 18
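The intersection above can be checked numerically. A minimal Python sketch (plain floats, no libraries) using the two-unknown elimination formulas derived later in these slides:

```python
# Solve the 2x2 system from the slide:
#   3*x1 + 2*x2 = 18
#  -1*x1 + 2*x2 = 2
# using the closed-form expressions for two unknowns.
a11, a12, c1 = 3.0, 2.0, 18.0
a21, a22, c2 = -1.0, 2.0, 2.0

det = a11 * a22 - a21 * a12          # determinant of the 2x2 coefficient matrix
x1 = (a22 * c1 - a12 * c2) / det     # same formulas as the elimination slide
x2 = (a11 * c2 - a21 * c1) / det

print(x1, x2)  # -> 4.0 3.0, the intersection point (4, 3)
```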
file: matrix.ppt p. 5
Special Cases
• No solution
• Infinite solutions
• Ill-conditioned
file: matrix.ppt p. 6
a) No solution - the lines have the same slope (parallel)
b) Infinite solutions - the two equations describe the same line, e.g.
   -1/2 x1 + x2 = 1
   -x1 + 2x2 = 2
c) Ill-conditioned - the slopes are so close that the point of
   intersection is difficult to detect visually
file: matrix.ppt p. 7
Let’s consider how we know if the system is ill-conditioned. Start by
considering systems where the slopes are identical.
• If the determinant is zero, the slopes are identical

a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2
file: matrix.ppt p. 8
x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22

Equal slopes means:   a11/a12 = a21/a22

Cross-multiplying:    a11 a22 = a21 a12
                      a11 a22 - a21 a12 = 0     Isn’t this the determinant?

        | a11  a12 |
det A = | a21  a22 | = a11 a22 - a21 a12
file: matrix.ppt p. 9
If the determinant is zero the slopes are equal.
This can mean:
- no solution
- infinite number of solutions
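Since a zero (or near-zero) determinant signals identical (or nearly identical) slopes, the check is easy to automate. A small Python sketch; the example coefficients are taken from the earlier slides:

```python
def det2(a11, a12, a21, a22):
    """2x2 determinant: a11*a22 - a21*a12."""
    return a11 * a22 - a21 * a12

# The 'infinite solutions' pair from the earlier slide: determinant is zero.
d_sing = det2(-0.5, 1.0, -1.0, 2.0)
# The graphical example 3x1+2x2=18, -x1+2x2=2: determinant well away from zero.
d_ok = det2(3.0, 2.0, -1.0, 2.0)

print(d_sing, d_ok)  # -> 0.0 8.0
```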
file: matrix.ppt p. 10
HW 1:
[ 37.2  4.7 ] { x1 }   { 22 }
[ 19.2  2.5 ] { x2 } = { 12 }
file: matrix.ppt p. 11
Matrix Inversion (Adjoint formula for A-1)
If A is any nxn matrix and Cij is the cofactor of aij, then the matrix

[ C11 C12 ... C1n ]
[ C21 C22 ... C2n ]
[  :   :       :  ]
[ Cn1 Cn2 ... Cnn ]

is called the matrix of cofactors of A; its transpose is the adjoint, Adj(A).
file: matrix.ppt p. 14
Theorem
If A is an invertible matrix, then
A-1 = [1/det(A)] adj(A)
file: matrix.ppt p. 15
Example 2
Use theorem 1 to find the inverse of the matrix A in the
previous example
    [ 1  0  1 ]            [ 6  0 -3 ]
A = [-1  3  0 ]   Adj(A) = [ 2  1 -1 ]
    [ 1  0  2 ]            [-3  0  3 ]

det(A) = 1·3·2 - 1·3·1 = 6 - 3 = 3   (only the nonzero products shown)
file: matrix.ppt p. 16
det(A) = 3

         [ 6  0 -3 ]
Adj(A) = [ 2  1 -1 ]
         [-3  0  3 ]
And since
A-1 = [1/det(A)] adj(A)
It follows:
      [  2    0   -1  ]
A-1 = [ 2/3  1/3 -1/3 ]
      [ -1    0    1  ]
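The adjoint computation above can be sketched in Python. This is an illustrative implementation, using exact fractions so the result matches the hand calculation:

```python
from fractions import Fraction

A = [[1, 0, 1],
     [-1, 3, 0],
     [1, 0, 2]]

def minor(m, i, j):
    # delete row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    # cofactor expansion along the first row
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def adjoint(m):
    n = len(m)
    # adj(A) is the TRANSPOSE of the cofactor matrix, hence the (j, i) indexing
    return [[(-1) ** (i + j) * det(minor(m, j, i)) for j in range(n)]
            for i in range(n)]

d = det(A)          # 3, as on the slide
adj = adjoint(A)    # first row [6, 0, -3], as on the slide
Ainv = [[Fraction(adj[i][j], d) for j in range(3)] for i in range(3)]
print(Ainv[0])      # -> [Fraction(2, 1), Fraction(0, 1), Fraction(-1, 1)]
```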
file: matrix.ppt p. 17
HW 2:
    [ 1  6 -3 ]
A = [-2  7  1 ]
    [ 3 -1  4 ]
Find:
(a) The matrix of cofactors.
(b) Adj(A)
(c) A-1 using the formula from theorem 1
file: matrix.ppt p. 18
Cramer’s Rule
• Not efficient for solving large numbers of
linear equations
• Useful for explaining some inherent
problems associated with solving linear
equations.
file: matrix.ppt p. 19
Cramer’s Rule
x1 = (1/|A|) | b1 a12 a13 |        x2 = (1/|A|) | a11 b1 a13 |
             | b2 a22 a23 |                     | a21 b2 a23 |
             | b3 a32 a33 |                     | a31 b3 a33 |
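Cramer’s rule as stated can be sketched directly: replace one column of [A] at a time with {b}. The 3x3 system below is assumed for illustration, not taken from the slides:

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    D = det3(A)
    xs = []
    for col in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]          # replace column `col` with {b}
        xs.append(det3(Ai) / D)
    return xs

A = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 2.0],
     [1.0, 0.0, 0.0]]
b = [4.0, 5.0, 6.0]
x = cramer3(A, b)
print(x)  # -> [6.0, 15.0, -23.0]
```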
file: matrix.ppt p. 20
Elimination of Unknowns
( algebraic approach)
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2
file: matrix.ppt p. 21
Elimination of Unknowns
( algebraic approach)
Multiply the first equation by a21 and the second by a11:

a21 a11 x1 + a21 a12 x2 = a21 c1       SUBTRACT
a11 a21 x1 + a11 a22 x2 = a11 c2

x2 = (a21 c1 - a11 c2) / (a12 a21 - a22 a11)     NOTE: same result as
                                                 Cramer’s Rule
x1 = (a22 c1 - a12 c2) / (a11 a22 - a12 a21)

file: matrix.ppt p. 22
Exercise:
Use of Cramer’s Rule
2x1 + 3x2 = 5
x1 + x2 = 5

[ 2  3 ] { x1 }   { 5 }
[ 1  1 ] { x2 } = { 5 }
file: matrix.ppt p. 23
Gauss Elimination
• One of the earliest methods developed for
solving simultaneous equations
• Important algorithm in use today
• Involves combining equations in order to
eliminate unknowns
file: matrix.ppt p. 24
Naive Gauss Elimination
• Technique for larger systems of equations
• Same principles of elimination
  - manipulate equations to eliminate an unknown from an equation
  - solve directly, then back-substitute into one of the original equations
file: matrix.ppt p. 25
Two Phases of Gauss Elimination
[ a11  a12  a13 | c1 ]
[ a21  a22  a23 | c2 ]        Forward
[ a31  a32  a33 | c3 ]        Elimination

[ a11  a12   a13   | c1   ]       Note: the prime indicates the number
[ 0    a22'  a23'  | c2'  ]       of times the element has changed from
[ 0    0     a33'' | c3'' ]       the original value.
file: matrix.ppt p. 26
Two Phases of Gauss Elimination
[ a11  a12   a13   | c1   ]
[ 0    a22'  a23'  | c2'  ]
[ 0    0     a33'' | c3'' ]       Back substitution

x3 = c3'' / a33''

x2 = (c2' - a23' x3) / a22'

x1 = (c1 - a12 x2 - a13 x3) / a11
file: matrix.ppt p. 27
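The two phases above can be sketched as a short Python routine. The 2x2 test system is the one solved graphically earlier; note the routine breaks down if any pivot is zero, the pitfall discussed on a later slide:

```python
def gauss_naive(A, c):
    """Naive Gauss elimination: forward elimination, then back substitution."""
    n = len(A)
    A = [row[:] for row in A]        # work on copies
    c = c[:]
    for k in range(n - 1):           # phase 1: forward elimination
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]    # fails if the pivot A[k][k] is zero
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            c[i] -= f * c[k]
    x = [0.0] * n                    # phase 2: back substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / A[i][i]
    return x

x = gauss_naive([[3.0, 2.0], [-1.0, 2.0]], [18.0, 2.0])
print(x)  # -> approximately [4.0, 3.0]
```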
Exercise:
2x1 + x2 + 3x3 = 1
4x1 + 4x2 + 7x3 = 1
2x1 + 5x2 + 9x3 = 3
file: matrix.ppt p. 28
Pitfalls of the Elimination
Method
• Division by zero
• Round-off errors
  - the magnitude of the pivot element is small compared to other elements
• Ill-conditioned systems
file: matrix.ppt p. 29
Division by Zero
• When we normalize, i.e. divide through by the pivot (as in a12/a11), we
  need to make sure we are not dividing by zero
• Problems may also arise if the coefficient is very close to zero

2x2 + 3x3 = 8
4x1 + 6x2 + 7x3 = 3
2x1 + x2 + 6x3 = 5
file: matrix.ppt p. 30
Techniques for Improving the
Solution
• Use of more significant figures
• Pivoting
• Scaling
[ a11  a12  a13 ] { x1 }   { b1 }
[ a21  a22  a23 ] { x2 } = { b2 }        A x = b
[ a31  a32  a33 ] { x3 }   { b3 }
file: matrix.ppt p. 31
Use of more significant figures
file: matrix.ppt p. 32
Pivoting
file: matrix.ppt p. 33
Pivoting
• Partial pivoting
  - rows are switched so that the largest element is the pivot element
• Complete pivoting
  - columns as well as rows are searched for the largest element and switched
  - rarely used because switching columns changes the order of the x’s,
    adding unjustified complexity to the computer program
file: matrix.ppt p. 34
Division by Zero - Solution
Pivoting has been developed
to partially avoid these problems
2x2 + 3x3 = 8             4x1 + 6x2 + 7x3 = 3
4x1 + 6x2 + 7x3 = 3   →   2x2 + 3x3 = 8
2x1 + x2 + 6x3 = 5        2x1 + x2 + 6x3 = 5
file: matrix.ppt p. 35
Scaling
file: matrix.ppt p. 36
Scaling
x1 0.00 x2 100
. x1 x2 1
file: matrix.ppt p. 37
EXAMPLE
file: matrix.ppt p. 38
SOLUTION
3x2 + 13x3 = 50
2x1 + 6x2 + x3 = 45
4x1 + 8x3 = 4
file: matrix.ppt p. 39
[ 0  3  13 | 50 ]        Pivot using the equation with
[ 2  6   1 | 45 ]        the largest coefficient in the
[ 4  0   8 |  4 ]        first column (a_i1)
file: matrix.ppt p. 40
[ 0  3  13 | 50 ]        [ 4  0   8 |  4 ]
[ 2  6   1 | 45 ]   →    [ 2  6   1 | 45 ]
[ 4  0   8 |  4 ]        [ 0  3  13 | 50 ]
file: matrix.ppt p. 41
[ 4  0   8 |  4 ]
[ 2  6   1 | 45 ]
[ 0  3  13 | 50 ]

Row2 - (2/4)·Row1:

[ 4  0   8 |  4 ]        Begin developing the
[ 0  6  -3 | 43 ]        upper triangular matrix
[ 0  3  13 | 50 ]
file: matrix.ppt p. 42
[ 4  0   8 |  4 ]
[ 0  6  -3 | 43 ]
[ 0  3  13 | 50 ]

Row3 - (3/6)·Row2:

[ 4  0   8    |  4   ]
[ 0  6  -3    | 43   ]
[ 0  0  14.5  | 28.5 ]

x3 = 28.5 / 14.5 = 1.966

x2 = (43 + 3(1.966)) / 6 = 8.149

x1 = (4 - 8(1.966)) / 4 = -2.931

CHECK:  3(8.149) + 13(1.966) = 50.00   okay

...end of problem
file: matrix.ppt p. 43
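The worked example can be reproduced with a small Python routine that adds partial pivoting to the elimination, mirroring the row swap performed above:

```python
# Gauss elimination with partial pivoting, applied to the worked example:
#         3*x2 + 13*x3 = 50
#  2*x1 + 6*x2 +   x3  = 45
#  4*x1         + 8*x3 =  4
# Without a row swap, the zero in position a11 forces a division by zero.
def gauss_pivot(A, c):
    n = len(A)
    A = [row[:] for row in A]
    c = c[:]
    for k in range(n - 1):
        # partial pivoting: bring the row with the largest |a_ik| to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        c[k], c[p] = c[p], c[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            c[i] -= f * c[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / A[i][i]
    return x

A = [[0.0, 3.0, 13.0],
     [2.0, 6.0, 1.0],
     [4.0, 0.0, 8.0]]
x = gauss_pivot(A, [50.0, 45.0, 4.0])
print([round(v, 3) for v in x])  # -> [-2.931, 8.149, 1.966]
```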
GAUSS-JORDAN
• Variation of Gauss elimination
• The primary motive for introducing this method is that it provides a
  simple and convenient means of computing the matrix inverse
• When an unknown is eliminated, it is eliminated from all other
  equations, rather than just the subsequent ones
file: matrix.ppt p. 44
GAUSS-JORDAN
• All rows are normalized by dividing them by their pivot elements
• The elimination step results in an identity matrix rather than an
  upper triangular (UT) matrix
file: matrix.ppt p. 45
Graphical depiction of Gauss-Jordan
[ a11  a12  a13 | c1 ]        [ 1  0  0 | c1(n) ]
[ a21  a22  a23 | c2 ]   →    [ 0  1  0 | c2(n) ]
[ a31  a32  a33 | c3 ]        [ 0  0  1 | c3(n) ]

[ 1  0  0 | c1(n) ]           x1 = c1(n)
[ 0  1  0 | c2(n) ]           x2 = c2(n)
[ 0  0  1 | c3(n) ]           x3 = c3(n)
file: matrix.ppt p. 46
Matrix Inversion
• [A][A]-1 = [A]-1[A] = I
• One application of the inverse is to solve
several systems differing only by {c}
[A]{x} = {c}
[A]-1[A] {x} = [A]-1{c}
[I]{x}={x}= [A]-1{c}
• One quick method to compute the inverse is
to augment [A] with [I] instead of {c}
file: matrix.ppt p. 47
Graphical Depiction of the Gauss-Jordan
Method with Matrix Inversion
        A           I
[ a11  a12  a13 | 1 0 0 ]
[ a21  a22  a23 | 0 1 0 ]         Note: the superscript "-1" denotes
[ a31  a32  a33 | 0 0 1 ]         that the original values have been
                                  converted to the matrix inverse,
[ 1 0 0 | a11^-1  a12^-1  a13^-1 ]  not 1/aij
[ 0 1 0 | a21^-1  a22^-1  a23^-1 ]
[ 0 0 1 | a31^-1  a32^-1  a33^-1 ]
        I           A^-1
file: matrix.ppt p. 48
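The augmentation scheme above can be sketched in Python. The test matrix is the one from the adjoint-formula example, so the first row of the result can be compared against [2 0 -1] found there. The partial-pivoting step is an added safeguard, not shown on the slide:

```python
def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting for robustness (an assumption, not on the slide)
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # normalize the pivot row so the pivot becomes 1
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        # eliminate the unknown from ALL other rows, not just those below
        for i in range(n):
            if i != k and M[i][k] != 0.0:
                f = M[i][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [row[n:] for row in M]

Ainv = gauss_jordan_inverse([[1.0, 0.0, 1.0],
                             [-1.0, 3.0, 0.0],
                             [1.0, 0.0, 2.0]])
print(Ainv[0])  # -> [2.0, 0.0, -1.0], matching the adjoint-formula result
```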
LU Decomposition Methods
• Elimination methods
Gauss elimination
Gauss Jordan
LU Decomposition Methods
file: matrix.ppt p. 49
Naive LU Decomposition
• [A]{x}={c}
• Suppose this can be rearranged as an upper
triangular matrix with 1’s on the diagonal
• [U]{x}={d}
• [A]{x}-{c}=0 [U]{x}-{d}=0
• Assume that a lower triangular matrix exists
that has the property
[L]{[U]{x}-{d}}= [A]{x}-{c}
file: matrix.ppt p. 50
Naive LU Decomposition
• [L]{[U]{x}-{d}}= [A]{x}-{c}
• Then from the rules of matrix multiplication
• [L][U]=[A]
• [L]{d}={c}
• [L][U]=[A] is referred to as the LU
decomposition of [A]. After it is
accomplished, solutions can be obtained
very efficiently by a two-step substitution
procedure
file: matrix.ppt p. 51
Consider how Gauss elimination can be
formulated as an LU decomposition
file: matrix.ppt p. 52
Although not as apparent, the matrix [L] is also produced during this
step. This can be readily illustrated for a three-equation system.
[ a11  a12  a13 ] { x1 }   { c1 }
[ a21  a22  a23 ] { x2 } = { c2 }
[ a31  a32  a33 ] { x3 }   { c3 }
file: matrix.ppt p. 53
[ a11  a12  a13 ] { x1 }   { c1 }
[ a21  a22  a23 ] { x2 } = { c2 }
[ a31  a32  a33 ] { x3 }   { c3 }

        [ 1    0    0 ]
[L]  =  [ f21  1    0 ]
        [ f31  f32  1 ]
file: matrix.ppt p. 55
[A]{x} = {c}
     ↓ decompose
[L][U]
     ↓ [L]{d} = {c}
{d}
     ↓ [U]{x} = {d}
{x}
file: matrix.ppt p. 56
Crout Decomposition
• Gauss elimination method involves two
major steps
forward elimination
back substitution
• Efforts in improvement focused on
development of improved elimination
methods
• One such method is Crout decomposition
file: matrix.ppt p. 57
Crout Decomposition
file: matrix.ppt p. 58
[ l11  0    0   ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
[ l21  l22  0   ] [ 0  1    u23 ] = [ a21  a22  a23 ]
[ l31  l32  l33 ] [ 0  0    1   ]   [ a31  a32  a33 ]
file: matrix.ppt p. 59
[ l11  0    0   ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
[ l21  l22  0   ] [ 0  1    u23 ] = [ a21  a22  a23 ]
[ l31  l32  l33 ] [ 0  0    1   ]   [ a31  a32  a33 ]

l11 = a11
l11 u12 = a12
l11 u13 = a13
file: matrix.ppt p. 60
l11 = a11

u12 = a12 / l11
u13 = a13 / l11

Once the first row of [U] is established, the operation can be
represented concisely:

u1j = a1j / l11    for j = 2, 3, ..., n
file: matrix.ppt p. 61
Schematic depicting Crout Decomposition
file: matrix.ppt p. 62
li1 = ai1                                       for i = 1, 2, ..., n

u1j = a1j / l11                                 for j = 2, 3, ..., n

For j = 2, 3, ..., n-1:

  lij = aij - Σ(k=1 to j-1) lik ukj             for i = j, j+1, ..., n

  ujk = [ajk - Σ(i=1 to j-1) lji uik] / ljj     for k = j+1, j+2, ..., n

lnn = ann - Σ(k=1 to n-1) lnk ukn
file: matrix.ppt p. 63
The Substitution Step
• [L]{[U]{x}-{d}}= [A]{x}-{c}
• [L][U]=[A]
• [L]{d}={c}
• [U]{x}={d}
• Recall our earlier graphical depiction of the
LU decomposition method
file: matrix.ppt p. 64
[A]{x} = {c}
     ↓ decompose
[L][U]
     ↓ [L]{d} = {c}
{d}
     ↓ [U]{x} = {d}
{x}
file: matrix.ppt p. 65
d1 = c1 / l11

di = [ci - Σ(j=1 to i-1) lij dj] / lii     for i = 2, 3, ..., n

Back substitution (recall [U]{x} = {d}):

xn = dn

xi = di - Σ(j=i+1 to n) uij xj             for i = n-1, n-2, ..., 1
file: matrix.ppt p. 66
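The Crout recurrences and the two-step substitution can be sketched together in Python (0-based indices, versus the 1-based slide formulas). The test system is the row-swapped matrix from the earlier Gauss elimination example, so the same solution should appear:

```python
def crout(A):
    """Crout decomposition: [L] with full diagonal, [U] with 1's on the diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):                       # column j of [L]
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for k in range(j + 1, n):                   # row j of [U]
            U[j][k] = (A[j][k] - sum(L[j][i] * U[i][k] for i in range(j))) / L[j][j]
    return L, U

def solve_lu(L, U, c):
    n = len(c)
    d = [0.0] * n
    for i in range(n):                              # forward: [L]{d} = {c}
        d[i] = (c[i] - sum(L[i][j] * d[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                  # back: [U]{x} = {d}
        x[i] = d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
    return x

A = [[4.0, 0.0, 8.0], [2.0, 6.0, 1.0], [0.0, 3.0, 13.0]]
L, U = crout(A)
x = solve_lu(L, U, [4.0, 45.0, 50.0])
print([round(v, 3) for v in x])  # -> [-2.931, 8.149, 1.966]
```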
Parameters used to quantify the
dimensions of a banded system.
BW = band width
HBW = half band width
file: matrix.ppt p. 67
Thomas Algorithm
• As with conventional LU decomposition methods, the algorithm
consists of three steps
decomposition
forward substitution
back substitution
• We want a scheme that reduces the large, inefficient use of
storage involved with banded matrices
• Consider the following tridiagonal system
file: matrix.ppt p. 68
[ a11  a12                              ] { x1 }   { c1 }
[ a21  a22  a23                         ] { x2 }   { c2 }
[      a32  a33  a34                    ] { x3 } = { c3 }
[            .    .    .                ] {  . }   {  . }
[      a(n-1,n-2)  a(n-1,n-1)  a(n-1,n) ] {  . }   {  . }
[                  a(n,n-1)    a(nn)    ] { xn }   { cn }
file: matrix.ppt p. 69
[ f1  g1                        ]
[ e2  f2  g2                    ]
[     e3  f3  g3                ]
[          .   .   .            ]
[     e(n-1)  f(n-1)  g(n-1)    ]
[              en      fn       ]
file: matrix.ppt p. 70
[ 0   f1  g1 ]       [ 0    b12  b13 ]
[ e2  f2  g2 ]       [ b21  b22  b23 ]       Storage can be accomplished
[ e3  f3  g3 ]   ↔   [ b31  b32  b33 ]       either as vectors (e, f, g)
[  .   .   . ]       [  .    .    .  ]       or as a compact n×3
[ en  fn  0  ]       [ bn1  bn2  0   ]       matrix [B]
file: matrix.ppt p. 73
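The Thomas algorithm on the (e, f, g) vector storage can be sketched in a few lines. The tridiagonal test system is assumed for illustration, not taken from the slides:

```python
def thomas(e, f, g, c):
    """Tridiagonal solver using (e, f, g) storage:
    e = sub-diagonal, f = diagonal, g = super-diagonal.
    O(n) work and storage instead of O(n^3) / O(n^2)."""
    n = len(f)
    f = f[:]; c = c[:]
    # decomposition + forward substitution
    for i in range(1, n):
        m = e[i] / f[i - 1]
        f[i] -= m * g[i - 1]
        c[i] -= m * c[i - 1]
    # back substitution
    x = [0.0] * n
    x[n - 1] = c[n - 1] / f[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (c[i] - g[i] * x[i + 1]) / f[i]
    return x

# Assumed demonstration system (not from the slides):
#  2x1 -  x2       = 1
# -x1  + 2x2 - x3  = 0
#       -x2  + 2x3 = 1        exact solution x = (1, 1, 1)
e = [0.0, -1.0, -1.0]   # e[0] unused, matching the zero in the storage scheme
f = [2.0, 2.0, 2.0]
g = [-1.0, -1.0, 0.0]   # g[n-1] unused
x = thomas(e, f, g, [1.0, 0.0, 1.0])
print(x)  # -> approximately [1.0, 1.0, 1.0]
```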
x1 = (c1 - a12 x2 - a13 x3 - ... - a1n xn) / a11

x2 = (c2 - a21 x1 - a23 x3 - ... - a2n xn) / a22

x3 = (c3 - a31 x1 - a32 x2 - ... - a3n xn) / a33

...

xn = (cn - an1 x1 - an2 x2 - ... - a(n,n-1) x(n-1)) / ann
file: matrix.ppt p. 74
Gauss-Seidel Method
• Start the solution process by guessing
values of x
• A simple way to obtain initial guesses is to
assume that they are all zero
• Calculate new values of xi starting with
x1 = c1/a11
• Progressively substitute through the
equations
• Repeat until tolerance is reached
file: matrix.ppt p. 75
x1 = (c1 - a12 x2 - a13 x3) / a11
x2 = (c2 - a21 x1 - a23 x3) / a22
x3 = (c3 - a31 x1 - a32 x2) / a33
file: matrix.ppt p. 76
EXAMPLE
Given the following augmented matrix,
complete one iteration of the Gauss
Seidel method.
[ 2  3  1 | 2 ]
[ 4  1  2 | 2 ]
[ 3  2  1 | 1 ]
file: matrix.ppt p. 77
Gauss-Seidel Method
convergence criterion
εa,i = | (xi^j - xi^(j-1)) / xi^j | × 100% < εs

As in previous iterative procedures for finding roots, we consider the
present and previous estimates.

| ∂u/∂x | + | ∂v/∂x | < 1        Criteria for convergence presented
| ∂u/∂y | + | ∂v/∂y | < 1        earlier in class material for
                                 nonlinear equations.
file: matrix.ppt p. 81
Convergence criteria for two linear
equations cont.
| a21 / a22 | < 1        | a12 / a11 | < 1

Extended to n equations:

| aii | > Σ | aij |    where j = 1, n excluding j = i
file: matrix.ppt p. 82
Convergence criteria for two linear
equations cont.
| aii | > Σ | aij |    where j = 1, n excluding j = i
file: matrix.ppt p. 83
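The diagonal-dominance condition is easy to test programmatically. A minimal sketch with assumed example matrices:

```python
def diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| for j != i, in every row."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

# Assumed examples for illustration (not from the slides):
dominant = diagonally_dominant([[10.0, 2.0, 1.0],
                                [1.0, 5.0, 1.0],
                                [2.0, 3.0, 10.0]])
swapped = diagonally_dominant([[1.0, 5.0],
                               [5.0, 1.0]])
print(dominant, swapped)  # -> True False
```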
Review the concepts of divergence and convergence by graphically
illustrating Gauss-Seidel for two linear equations.
file: matrix.ppt p. 84
v: 11x1 - 9x2 = 99
u: 11x1 - 13x2 = 286
file: matrix.ppt p. 85
u: 11x1 - 13x2 = 286
v: 11x1 - 9x2 = 99
file: matrix.ppt p. 87
Improvement of Convergence Using
Relaxation
xi^new = λ·xi^new + (1 - λ)·xi^old

• if λ = 1, the result is unmodified
• if 0 < λ < 1, underrelaxation
  - nonconvergent systems may converge
  - hasten convergence by dampening out oscillations
• if 1 < λ < 2, overrelaxation
  - extra weight is placed on the present value
  - assumption that the new value is moving toward the correct
    solution too slowly
file: matrix.ppt p. 88
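Relaxation drops into the Gauss-Seidel update as a one-line blend of new and old values. A sketch on an assumed diagonally dominant 2x2 system:

```python
def gauss_seidel_relaxed(A, c, lam=1.0, iters=50):
    """Gauss-Seidel sweeps with relaxation factor lam."""
    n = len(A)
    x = [0.0] * n                    # initial guesses of zero
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (c[i] - s) / A[i][i]
            x[i] = lam * x_new + (1.0 - lam) * x[i]   # the relaxation blend
    return x

# Assumed example: 4x1 + x2 = 9, x1 + 3x2 = 7; exact solution (20/11, 19/11).
plain = gauss_seidel_relaxed([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0], lam=1.0)
over = gauss_seidel_relaxed([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0], lam=1.2)
print([round(v, 6) for v in plain])  # -> [1.818182, 1.727273]
```

For a diagonally dominant system like this one, λ = 1 already converges quickly; over- or underrelaxation mainly pays off on harder systems.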
Jacobi Iteration
• Iterative like Gauss Seidel
• Gauss-Seidel immediately uses the value of xi in the equation for x(i+1)
• Jacobi computes all of the new xi values from the previous
  iteration’s values before updating any of them
file: matrix.ppt p. 89
Graphical depiction of difference between Gauss-Seidel and Jacobi
FIRST ITERATION

Gauss-Seidel (new values used immediately):
x1 = (c1 - a12 x2 - a13 x3) / a11
x2 = (c2 - a21 x1 - a23 x3) / a22      (uses the new x1)
x3 = (c3 - a31 x1 - a32 x2) / a33      (uses the new x1 and x2)

Jacobi (all values from the previous iteration):
x1 = (c1 - a12 x2 - a13 x3) / a11
x2 = (c2 - a21 x1 - a23 x3) / a22      (uses the old x1)
x3 = (c3 - a31 x1 - a32 x2) / a33      (uses the old x1 and x2)

The SECOND ITERATION proceeds the same way in each method.
file: matrix.ppt p. 90
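The difference can be made concrete by running the first iteration of each method from zero initial guesses. The 3x3 system here is an assumed diagonally dominant example (the slide’s own matrix may have lost minus signs in conversion):

```python
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 1.0],
     [1.0, 1.0, 6.0]]
c = [6.0, 7.0, 8.0]
n = 3

# Gauss-Seidel: each new value is used immediately within the sweep.
x_gs = [0.0, 0.0, 0.0]
for i in range(n):
    x_gs[i] = (c[i] - sum(A[i][j] * x_gs[j] for j in range(n) if j != i)) / A[i][i]

# Jacobi: the whole previous vector is held fixed for the sweep.
x_old = [0.0, 0.0, 0.0]
x_j = [(c[i] - sum(A[i][j] * x_old[j] for j in range(n) if j != i)) / A[i][i]
       for i in range(n)]

print([round(v, 4) for v in x_gs])  # -> [1.5, 1.1, 0.9]
print([round(v, 4) for v in x_j])   # -> [1.5, 1.4, 1.3333]
```

Already after one sweep the two estimates differ everywhere except x1, because Gauss-Seidel fed the fresh x1 and x2 into the later equations.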
EXAMPLE
Given the following augmented matrix, complete
one iteration of the Gauss Seidel method and the
Jacobi method.
[ 2  3  1 | 2 ]
[ 4  1  2 | 2 ]
[ 3  2  1 | 1 ]
file: matrix.ppt p. 91